Advanced Encryption Standard
https://en.wikipedia.org/wiki/Advanced%20Encryption%20Standard

The Advanced Encryption Standard (AES), also known by its original name Rijndael, is a specification for the encryption of electronic data established by the U.S. National Institute of Standards and Technology (NIST) in 2001.
AES is a variant of the Rijndael block cipher developed by two Belgian cryptographers, Joan Daemen and Vincent Rijmen, who submitted a proposal to NIST during the AES selection process. Rijndael is a family of ciphers with different key and block sizes. For AES, NIST selected three members of the Rijndael family, each with a block size of 128 bits, but three different key lengths: 128, 192 and 256 bits.
AES has been adopted by the U.S. government. It supersedes the Data Encryption Standard (DES), which was published in 1977. The algorithm described by AES is a symmetric-key algorithm, meaning the same key is used for both encrypting and decrypting the data.
In the United States, AES was announced by the NIST as U.S. FIPS PUB 197 (FIPS 197) on November 26, 2001. This announcement followed a five-year standardization process in which fifteen competing designs were presented and evaluated, before the Rijndael cipher was selected as the most suitable (see Advanced Encryption Standard process for more details).
AES is included in the ISO/IEC 18033-3 standard. AES became effective as a U.S. federal government standard on May 26, 2002, after approval by the U.S. Secretary of Commerce. AES is available in many different encryption packages, and is the first (and only) publicly accessible cipher approved by the U.S. National Security Agency (NSA) for top secret information when used in an NSA approved cryptographic module (see Security of AES, below).
Definitive standards
The Advanced Encryption Standard (AES) is defined in each of:
FIPS PUB 197: Advanced Encryption Standard (AES)
ISO/IEC 18033-3: Block ciphers
Description of the ciphers
AES is based on a design principle known as a substitution–permutation network, and is efficient in both software and hardware. Unlike its predecessor DES, AES does not use a Feistel network. AES is a variant of Rijndael, with a fixed block size of 128 bits, and a key size of 128, 192, or 256 bits. By contrast, Rijndael per se is specified with block and key sizes that may be any multiple of 32 bits, with a minimum of 128 and a maximum of 256 bits.
AES operates on a 4 × 4 column-major order array of 16 bytes, termed the state. Most AES calculations are done in the finite field GF(2^8).
For instance, 16 bytes b0, b1, ..., b15 are represented as this two-dimensional array:

b0 b4 b8  b12
b1 b5 b9  b13
b2 b6 b10 b14
b3 b7 b11 b15
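The following C sketch is illustrative only (the indexing convention and function names are assumptions of this sketch, not part of the standard): it loads 16 input bytes into a column-major 4 × 4 state and reads them back out.

#include <stdint.h>
#include <string.h>

/* The AES "state": a 4 x 4 byte array filled column by column, so that
   input byte in[r + 4*c] becomes state[r][c]. */
typedef uint8_t state_t[4][4];

static void load_state(state_t s, const uint8_t in[16]) {
    for (int c = 0; c < 4; c++)            /* column index */
        for (int r = 0; r < 4; r++)        /* row index    */
            s[r][c] = in[r + 4 * c];
}

static void store_state(uint8_t out[16], state_t s) {
    for (int c = 0; c < 4; c++)
        for (int r = 0; r < 4; r++)
            out[r + 4 * c] = s[r][c];
}

int main(void) {
    uint8_t in[16], out[16];
    state_t s;
    for (int i = 0; i < 16; i++) in[i] = (uint8_t)i;
    load_state(s, in);
    store_state(out, s);
    return memcmp(in, out, 16);            /* 0 on a successful round trip */
}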
The key size used for an AES cipher specifies the number of transformation rounds that convert the input, called the plaintext, into the final output, called the ciphertext. The number of rounds is as follows:
10 rounds for 128-bit keys.
12 rounds for 192-bit keys.
14 rounds for 256-bit keys.
Each round consists of several processing steps, including one that depends on the encryption key itself. A set of reverse rounds is applied to transform ciphertext back into the original plaintext using the same encryption key.
High-level description of the algorithm
KeyExpansion – round keys are derived from the cipher key using the AES key schedule. AES requires a separate 128-bit round key block for each round plus one more.
Initial round key addition:
AddRoundKey – each byte of the state is combined with a byte of the round key using bitwise xor.
9, 11 or 13 rounds:
SubBytes – a non-linear substitution step where each byte is replaced with another according to a lookup table.
ShiftRows – a transposition step where the last three rows of the state are shifted cyclically a certain number of steps.
MixColumns – a linear mixing operation which operates on the columns of the state, combining the four bytes in each column.
AddRoundKey
Final round (making 10, 12 or 14 rounds in total):
SubBytes
ShiftRows
AddRoundKey
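The following C sketch shows only this round structure; it is a hedged illustration, not a reference implementation. The four step functions are left as stubs here (sketches of each appear in the sections below), and the flat, column-major round-key layout is an assumption of this sketch.

#include <stdint.h>

static void sub_bytes(uint8_t s[4][4])                        { (void)s; }            /* see the SubBytes sketch    */
static void shift_rows(uint8_t s[4][4])                       { (void)s; }            /* see the ShiftRows sketch   */
static void mix_columns(uint8_t s[4][4])                      { (void)s; }            /* see the MixColumns sketch  */
static void add_round_key(uint8_t s[4][4], const uint8_t *rk) { (void)s; (void)rk; }  /* see the AddRoundKey sketch */

/* Nr is 10, 12 or 14; round_keys holds (Nr + 1) round keys of 16 bytes each. */
static void aes_encrypt_block(uint8_t state[4][4], const uint8_t *round_keys, int Nr) {
    add_round_key(state, round_keys);                 /* initial round key addition */
    for (int round = 1; round < Nr; round++) {        /* 9, 11 or 13 full rounds    */
        sub_bytes(state);
        shift_rows(state);
        mix_columns(state);
        add_round_key(state, round_keys + 16 * round);
    }
    sub_bytes(state);                                 /* final round omits MixColumns */
    shift_rows(state);
    add_round_key(state, round_keys + 16 * Nr);
}

int main(void) {
    uint8_t state[4][4] = {{0}};
    uint8_t round_keys[15 * 16] = {0};                /* enough for Nr = 14 */
    aes_encrypt_block(state, round_keys, 10);         /* AES-128: 10 rounds */
    return 0;
}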
The SubBytes step
In the SubBytes step, each byte a in the state array is replaced with its substitution value S(a) using an 8-bit substitution box (the Rijndael S-box). Note that before round 0, the state array is simply the plaintext/input. This operation provides the non-linearity in the cipher. The S-box used is derived from the multiplicative inverse over GF(2^8), known to have good non-linearity properties. To avoid attacks based on simple algebraic properties, the S-box is constructed by combining the inverse function with an invertible affine transformation. The S-box is also chosen to avoid any fixed points (and so is a derangement), i.e., S(a) ≠ a, and also any opposite fixed points, i.e., S(a) ⊕ a ≠ 0xFF.
While performing the decryption, the InvSubBytes step (the inverse of SubBytes) is used, which requires first taking the inverse of the affine transformation and then finding the multiplicative inverse.
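A minimal C sketch of that construction follows: each S-box value is the byte's multiplicative inverse in GF(2^8) followed by the affine transformation. The helper names are illustrative, and the brute-force inverse is only for clarity; real implementations use a precomputed 256-byte table.

#include <stdint.h>
#include <stdio.h>

static uint8_t gmul(uint8_t a, uint8_t b) {    /* multiplication in GF(2^8) */
    uint8_t p = 0;
    for (int i = 0; i < 8; i++) {
        if (b & 1) p ^= a;
        uint8_t carry = a & 0x80;
        a <<= 1;
        if (carry) a ^= 0x1B;                  /* reduce modulo x^8 + x^4 + x^3 + x + 1 */
        b >>= 1;
    }
    return p;
}

static uint8_t ginv(uint8_t x) {               /* multiplicative inverse, 0 mapped to 0 */
    if (x == 0) return 0;
    for (int y = 1; y < 256; y++)
        if (gmul(x, (uint8_t)y) == 1) return (uint8_t)y;
    return 0;                                  /* not reached */
}

static uint8_t rotl8(uint8_t v, int n) { return (uint8_t)((v << n) | (v >> (8 - n))); }

static uint8_t sbox_value(uint8_t x) {         /* inverse, then affine transformation */
    uint8_t b = ginv(x);
    return b ^ rotl8(b, 1) ^ rotl8(b, 2) ^ rotl8(b, 3) ^ rotl8(b, 4) ^ 0x63;
}

int main(void) {
    printf("S(0x00) = 0x%02X\n", sbox_value(0x00));  /* 0x63: zero is not a fixed point */
    printf("S(0x53) = 0x%02X\n", sbox_value(0x53));  /* 0xED                            */
    return 0;
}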
The ShiftRows step
The ShiftRows step operates on the rows of the state; it cyclically shifts the bytes in each row by a certain offset. For AES, the first row is left unchanged. Each byte of the second row is shifted one to the left. Similarly, the third and fourth rows are shifted by offsets of two and three respectively. In this way, each column of the output state of the ShiftRows step is composed of bytes from each column of the input state. The importance of this step is to avoid the columns being encrypted independently, in which case AES would degenerate into four independent block ciphers.
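A short C sketch of this rotation follows (the state indexing convention is assumed, as in the earlier sketches):

#include <stdint.h>
#include <stdio.h>

static void shift_rows(uint8_t state[4][4]) {
    for (int r = 1; r < 4; r++) {              /* row 0 is left unchanged */
        uint8_t tmp[4];
        for (int c = 0; c < 4; c++)
            tmp[c] = state[r][(c + r) % 4];    /* rotate row r left by r positions */
        for (int c = 0; c < 4; c++)
            state[r][c] = tmp[c];
    }
}

int main(void) {
    uint8_t s[4][4];
    for (int r = 0; r < 4; r++)
        for (int c = 0; c < 4; c++)
            s[r][c] = (uint8_t)(4 * r + c);    /* row r holds 4r .. 4r+3 */
    shift_rows(s);
    for (int r = 0; r < 4; r++)                /* rows become: 0 1 2 3 / 5 6 7 4 / 10 11 8 9 / 15 12 13 14 */
        printf("%2d %2d %2d %2d\n", s[r][0], s[r][1], s[r][2], s[r][3]);
    return 0;
}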
The MixColumns step
In the MixColumns step, the four bytes of each column of the state are combined using an invertible linear transformation. The MixColumns function takes four bytes as input and outputs four bytes, where each input byte affects all four output bytes. Together with ShiftRows, MixColumns provides diffusion in the cipher.
During this operation, each column is transformed using a fixed matrix (matrix left-multiplied by column gives new value of column in the state):

02 03 01 01
01 02 03 01
01 01 02 03
03 01 01 02

Matrix multiplication is composed of multiplication and addition of the entries. Entries are bytes treated as coefficients of a polynomial of order x^7. Addition is simply XOR. Multiplication is modulo the irreducible polynomial x^8 + x^4 + x^3 + x + 1. If processed bit by bit, then, after shifting, a conditional XOR with 0x1B should be performed if the shifted value is larger than 0xFF (overflow must be corrected by subtraction of the generating polynomial). These are special cases of the usual multiplication in GF(2^8).
In a more general sense, each column is treated as a polynomial over GF(2^8) and is then multiplied modulo z^4 + 1 with the fixed polynomial c(z) = 03·z^3 + 01·z^2 + 01·z + 02. The coefficients are displayed in their hexadecimal equivalent of the binary representation of bit polynomials from GF(2)[x]. The MixColumns step can also be viewed as a multiplication by the shown particular MDS matrix in the finite field GF(2^8). This process is described further in the article Rijndael MixColumns.
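A C sketch of the transformation of a single column follows, assuming the matrix above; xtime() is the shift with conditional XOR by 0x1B just described, and mul3() computes multiplication by 03 as xtime(a) ^ a.

#include <stdint.h>
#include <stdio.h>

static uint8_t xtime(uint8_t a) {              /* multiply by 02 in GF(2^8) */
    return (uint8_t)((a << 1) ^ ((a & 0x80) ? 0x1B : 0x00));
}

static uint8_t mul3(uint8_t a) { return (uint8_t)(xtime(a) ^ a); }   /* multiply by 03 */

static void mix_single_column(uint8_t col[4]) {
    uint8_t a0 = col[0], a1 = col[1], a2 = col[2], a3 = col[3];
    col[0] = (uint8_t)(xtime(a0) ^ mul3(a1)  ^ a2        ^ a3);
    col[1] = (uint8_t)(a0        ^ xtime(a1) ^ mul3(a2)  ^ a3);
    col[2] = (uint8_t)(a0        ^ a1        ^ xtime(a2) ^ mul3(a3));
    col[3] = (uint8_t)(mul3(a0)  ^ a1        ^ a2        ^ xtime(a3));
}

int main(void) {
    uint8_t col[4] = { 0xDB, 0x13, 0x53, 0x45 };
    mix_single_column(col);
    printf("%02X %02X %02X %02X\n", col[0], col[1], col[2], col[3]);  /* DB 13 53 45 mixes to 8E 4D A1 BC */
    return 0;
}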
The AddRoundKey step
In the AddRoundKey step, the subkey is combined with the state. For each round, a subkey is derived from the main key using Rijndael's key schedule; each subkey is the same size as the state. The subkey is added by combining each byte of the state with the corresponding byte of the subkey using bitwise XOR.
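A one-function C sketch of this step follows; the column-major layout of the round-key bytes is an assumption of this sketch.

#include <stdint.h>

static void add_round_key(uint8_t state[4][4], const uint8_t round_key[16]) {
    for (int c = 0; c < 4; c++)
        for (int r = 0; r < 4; r++)
            state[r][c] ^= round_key[r + 4 * c];   /* byte-wise XOR with the subkey */
}

int main(void) {
    uint8_t s[4][4] = {{0}};
    uint8_t rk[16]  = {0};
    add_round_key(s, rk);                          /* XOR with an all-zero key leaves the state unchanged */
    return 0;
}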
Optimization of the cipher
On systems with 32-bit or larger words, it is possible to speed up execution of this cipher by combining the SubBytes and ShiftRows steps with the MixColumns step by transforming them into a sequence of table lookups. This requires four 256-entry 32-bit tables (together occupying 4096 bytes). A round can then be performed with 16 table lookup operations and 12 32-bit exclusive-or operations, followed by four 32-bit exclusive-or operations in the AddRoundKey step. Alternatively, the table lookup operation can be performed with a single 256-entry 32-bit table (occupying 1024 bytes) followed by circular rotation operations.
Using a byte-oriented approach, it is possible to combine the SubBytes, ShiftRows, and MixColumns steps into a single round operation.
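The following C sketch illustrates the table-lookup idea under some simplifying assumptions: the four tables are stored as 4-byte vectors rather than packed 32-bit words, the GF(2^8) and S-box helpers from the earlier sketches are repeated so it compiles on its own, and all names are illustrative.

#include <stdint.h>

static uint8_t gmul(uint8_t a, uint8_t b) {            /* multiplication in GF(2^8) */
    uint8_t p = 0;
    for (int i = 0; i < 8; i++) {
        if (b & 1) p ^= a;
        uint8_t carry = a & 0x80;
        a <<= 1;
        if (carry) a ^= 0x1B;
        b >>= 1;
    }
    return p;
}

static uint8_t sbox_value(uint8_t x) {                 /* inverse + affine, as sketched earlier */
    uint8_t inv = 0;
    if (x)
        for (int y = 1; y < 256; y++)
            if (gmul(x, (uint8_t)y) == 1) { inv = (uint8_t)y; break; }
    uint8_t s = (uint8_t)(inv ^ 0x63);
    for (int i = 1; i <= 4; i++)
        s ^= (uint8_t)((inv << i) | (inv >> (8 - i)));
    return s;
}

/* T[r][x][j] = M[j][r] * S(x), where M is the MixColumns matrix:
   SubBytes, the ShiftRows byte placement and MixColumns are all folded in. */
static const uint8_t M[4][4] = { {2,3,1,1}, {1,2,3,1}, {1,1,2,3}, {3,1,1,2} };
static uint8_t T[4][256][4];                           /* 4 x 256 x 4 bytes = 4096 bytes */

static void build_tables(void) {
    for (int r = 0; r < 4; r++)
        for (int x = 0; x < 256; x++)
            for (int j = 0; j < 4; j++)
                T[r][x][j] = gmul(M[j][r], sbox_value((uint8_t)x));
}

/* One full round: 16 table lookups plus XORs, then the round-key XOR. */
static void round_with_tables(uint8_t out[4][4], uint8_t in[4][4], const uint8_t round_key[16]) {
    for (int c = 0; c < 4; c++)
        for (int j = 0; j < 4; j++)
            out[j][c] = (uint8_t)(T[0][in[0][c]][j]
                                ^ T[1][in[1][(c + 1) % 4]][j]
                                ^ T[2][in[2][(c + 2) % 4]][j]
                                ^ T[3][in[3][(c + 3) % 4]][j]
                                ^ round_key[j + 4 * c]);
}

int main(void) {
    uint8_t in[4][4] = {{0}}, out[4][4], rk[16] = {0};
    build_tables();
    round_with_tables(out, in, rk);                    /* one round on an all-zero block and key */
    return 0;
}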
Security
The National Security Agency (NSA) reviewed all the AES finalists, including Rijndael, and stated that all of them were secure enough for U.S. Government non-classified data. In June 2003, the U.S. Government announced that AES could be used to protect classified information:
The design and strength of all key lengths of the AES algorithm (i.e., 128, 192 and 256) are sufficient to protect classified information up to the SECRET level. TOP SECRET information will require use of either the 192 or 256 key lengths. The implementation of AES in products intended to protect national security systems and/or information must be reviewed and certified by NSA prior to their acquisition and use.
AES has 10 rounds for 128-bit keys, 12 rounds for 192-bit keys, and 14 rounds for 256-bit keys.
By 2006, the best known attacks were on 7 rounds for 128-bit keys, 8 rounds for 192-bit keys, and 9 rounds for 256-bit keys.
Known attacks
For cryptographers, a cryptographic "break" is anything faster than a brute-force attack – i.e., performing one trial decryption for each possible key in sequence (see Cryptanalysis). A break can thus include results that are infeasible with current technology. Despite being impractical, theoretical breaks can sometimes provide insight into vulnerability patterns. The largest successful publicly known brute-force attack against a widely implemented block-cipher encryption algorithm was against a 64-bit RC5 key by distributed.net in 2006.
The key space increases by a factor of 2 for each additional bit of key length, and if every possible value of the key is equiprobable, this translates into a doubling of the average brute-force key search time. This implies that the effort of a brute-force search increases exponentially with key length. Key length in itself does not imply security against attacks, since there are ciphers with very long keys that have been found to be vulnerable.
AES has a fairly simple algebraic framework. In 2002, a theoretical attack, named the "XSL attack", was announced by Nicolas Courtois and Josef Pieprzyk, purporting to show a weakness in the AES algorithm, partially due to the low complexity of its nonlinear components. Since then, other papers have shown that the attack, as originally presented, is unworkable; see XSL attack on block ciphers.
During the AES selection process, developers of competing algorithms wrote of Rijndael's algorithm "we are concerned about [its] use ... in security-critical applications." In October 2000, however, at the end of the AES selection process, Bruce Schneier, a developer of the competing algorithm Twofish, wrote that while he thought successful academic attacks on Rijndael would be developed someday, he "did not believe that anyone will ever discover an attack that will allow someone to read Rijndael traffic."
Until May 2009, the only successful published attacks against the full AES were side-channel attacks on some specific implementations. In 2009, a new related-key attack was discovered that exploits the simplicity of AES's key schedule and has a complexity of 2^119. In December 2009 it was improved to 2^99.5. This is a follow-up to an attack discovered earlier in 2009 by Alex Biryukov, Dmitry Khovratovich, and Ivica Nikolić, with a complexity of 2^96 for one out of every 2^35 keys. However, related-key attacks are not of concern in any properly designed cryptographic protocol, as a properly designed protocol (i.e., implementational software) will take care not to allow related keys, essentially by constraining an attacker's means of selecting keys for relatedness.
Another attack was blogged by Bruce Schneier on July 30, 2009, and released as a preprint on August 3, 2009. This new attack, by Alex Biryukov, Orr Dunkelman, Nathan Keller, Dmitry Khovratovich, and Adi Shamir, is against AES-256 that uses only two related keys and 2^39 time to recover the complete 256-bit key of a 9-round version, or 2^45 time for a 10-round version with a stronger type of related subkey attack, or 2^70 time for an 11-round version. 256-bit AES uses 14 rounds, so these attacks are not effective against full AES.
The practicality of these attacks with stronger related keys has been criticized, for instance, by the paper on chosen-key-relations-in-the-middle attacks on AES-128 authored by Vincent Rijmen in 2010.
In November 2009, the first known-key distinguishing attack against a reduced 8-round version of AES-128 was released as a preprint.
This known-key distinguishing attack is an improvement of the rebound, or the start-from-the-middle, attack against AES-like permutations, which view two consecutive rounds of permutation as the application of a so-called Super-S-box. It works on the 8-round version of AES-128, with a time complexity of 2^48, and a memory complexity of 2^32. 128-bit AES uses 10 rounds, so this attack is not effective against full AES-128.
The first key-recovery attacks on full AES were by Andrey Bogdanov, Dmitry Khovratovich, and Christian Rechberger, and were published in 2011. The attack is a biclique attack and is faster than brute force by a factor of about four. It requires 2^126.2 operations to recover an AES-128 key. For AES-192 and AES-256, 2^190.2 and 2^254.6 operations are needed, respectively. This result has been further improved to 2^126.0 for AES-128, 2^189.9 for AES-192 and 2^254.3 for AES-256, which are the current best results in key recovery attacks against AES.
This is a very small gain, as a 126-bit key (instead of 128 bits) would still take billions of years to brute force on current and foreseeable hardware. Also, the authors calculate the best attack using their technique on AES with a 128-bit key requires storing 2^88 bits of data. That works out to about 38 trillion terabytes of data, which is more than all the data stored on all the computers on the planet in 2016. As such, there are no practical implications on AES security. The space complexity has later been improved to 2^56 bits, which is 9007 terabytes.
According to the Snowden documents, the NSA is doing research on whether a cryptographic attack based on tau statistic may help to break AES.
At present, there is no known practical attack that would allow someone without knowledge of the key to read data encrypted by AES when correctly implemented.
Side-channel attacks
Side-channel attacks do not attack the cipher as a black box, and thus are not related to cipher security as defined in the classical context, but are important in practice. They attack implementations of the cipher on hardware or software systems that inadvertently leak data. There are several such known attacks on various implementations of AES.
In April 2005, D. J. Bernstein announced a cache-timing attack that he used to break a custom server that used OpenSSL's AES encryption. The attack required over 200 million chosen plaintexts. The custom server was designed to give out as much timing information as possible (the server reports back the number of machine cycles taken by the encryption operation). However, as Bernstein pointed out, "reducing the precision of the server's timestamps, or eliminating them from the server's responses, does not stop the attack: the client simply uses round-trip timings based on its local clock, and compensates for the increased noise by averaging over a larger number of samples".
In October 2005, Dag Arne Osvik, Adi Shamir and Eran Tromer presented a paper demonstrating several cache-timing attacks against the implementations of AES found in OpenSSL and Linux's dm-crypt partition encryption function. One attack was able to obtain an entire AES key after only 800 operations triggering encryptions, in a total of 65 milliseconds. This attack requires the attacker to be able to run programs on the same system or platform that is performing AES.
In December 2009 an attack on some hardware implementations was published that used differential fault analysis and allows recovery of a key with a complexity of 2^32.
In November 2010 Endre Bangerter, David Gullasch and Stephan Krenn published a paper which described a practical approach to a "near real time" recovery of secret keys from AES-128 without the need for either cipher text or plaintext. The approach also works on AES-128 implementations that use compression tables, such as OpenSSL. Like some earlier attacks, this one requires the ability to run unprivileged code on the system performing the AES encryption, which may be achieved by malware infection far more easily than commandeering the root account.
In March 2016, Ashokkumar C., Ravi Prakash Giri and Bernard Menezes presented a side-channel attack on AES implementations that can recover the complete 128-bit AES key in just 6–7 blocks of plaintext/ciphertext, which is a substantial improvement over previous works that require between 100 and a million encryptions. The proposed attack requires standard user privilege, and the key-retrieval algorithms run in under a minute.
Many modern CPUs have built-in hardware instructions for AES, which protect against timing-related side-channel attacks.
NIST/CSEC validation
The Cryptographic Module Validation Program (CMVP) is operated jointly by the United States Government's National Institute of Standards and Technology (NIST) Computer Security Division and the Communications Security Establishment (CSE) of the Government of Canada. The use of cryptographic modules validated to NIST FIPS 140-2 is required by the United States Government for encryption of all data that has a classification of Sensitive but Unclassified (SBU) or above. From NSTISSP #11, National Policy Governing the Acquisition of Information Assurance: “Encryption products for protecting classified information will be certified by NSA, and encryption products intended for protecting sensitive information will be certified in accordance with NIST FIPS 140-2.”
The Government of Canada also recommends the use of FIPS 140 validated cryptographic modules in unclassified applications of its departments.
Although NIST publication 197 (“FIPS 197”) is the unique document that covers the AES algorithm, vendors typically approach the CMVP under FIPS 140 and ask to have several algorithms (such as Triple DES or SHA1) validated at the same time. Therefore, it is rare to find cryptographic modules that are uniquely FIPS 197 validated and NIST itself does not generally take the time to list FIPS 197 validated modules separately on its public web site. Instead, FIPS 197 validation is typically just listed as an "FIPS approved: AES" notation (with a specific FIPS 197 certificate number) in the current list of FIPS 140 validated cryptographic modules.
The Cryptographic Algorithm Validation Program (CAVP) allows for independent validation of the correct implementation of the AES algorithm. Successful validation results in being listed on the NIST validations page. This testing is a pre-requisite for the FIPS 140-2 module validation described below. However, successful CAVP validation in no way implies that the cryptographic module implementing the algorithm is secure. A cryptographic module lacking FIPS 140-2 validation or specific approval by the NSA is not deemed secure by the US Government and cannot be used to protect government data.
FIPS 140-2 validation is challenging to achieve both technically and fiscally. There is a standardized battery of tests as well as an element of source code review that must be passed over a period of a few weeks. The cost to perform these tests through an approved laboratory can be significant (e.g., well over $30,000 US) and does not include the time it takes to write, test, document and prepare a module for validation. After validation, modules must be re-submitted and re-evaluated if they are changed in any way. This can vary from simple paperwork updates if the security functionality did not change to a more substantial set of re-testing if the security functionality was impacted by the change.
Test vectors
Test vectors are a set of known ciphers for a given input and key. NIST distributes the reference of AES test vectors as AES Known Answer Test (KAT) Vectors.
Performance
High speed and low RAM requirements were criteria of the AES selection process. As the chosen algorithm, AES performed well on a wide variety of hardware, from 8-bit smart cards to high-performance computers.
On a Pentium Pro, AES encryption requires 18 clock cycles per byte, equivalent to a throughput of about 11 MB/s for a 200 MHz processor.
On Intel Core and AMD Ryzen CPUs supporting AES-NI instruction set extensions, throughput can be multiple GB/s (even over 10 GB/s).
Implementations
See also
AES modes of operation
Disk encryption
Network encryption
Whirlpool – hash function created by Vincent Rijmen and Paulo S. L. M. Barreto
List of free and open-source software packages
Notes
References
External links
AES algorithm archive information – (old, unmaintained)
Animation of Rijndael – AES deeply explained and animated using Flash (by Enrique Zabala / University ORT / Montevideo / Uruguay). This animation (in English, Spanish, and German) is also part of CrypTool 1 (menu Indiv. Procedures → Visualization of Algorithms → AES).
HTML5 Animation of Rijndael – Same Animation as above made in HTML5.
Assembly language
https://en.wikipedia.org/wiki/Assembly%20language

In computer programming, assembly language (or assembler language), sometimes abbreviated asm, is any low-level programming language in which there is a very strong correspondence between the instructions in the language and the architecture's machine code instructions. Assembly language usually has one statement per machine instruction (1:1), but constants, comments, assembler directives, symbolic labels of, e.g., memory locations, registers, and macros are generally also supported.
Assembly code is converted into executable machine code by a utility program referred to as an assembler. The term "assembler" is generally attributed to Wilkes, Wheeler and Gill in their 1951 book The Preparation of Programs for an Electronic Digital Computer, who, however, used the term to mean "a program that assembles another program consisting of several sections into a single program". The conversion process is referred to as assembly, as in assembling the source code. The computational step when an assembler is processing a program is called assembly time. Assembly language may also be called symbolic machine code.
Because assembly depends on the machine code instructions, each assembly language is specific to a particular computer architecture.
Sometimes there is more than one assembler for the same architecture, and sometimes an assembler is specific to an operating system or to particular operating systems. Most assembly languages do not provide specific syntax for operating system calls, and most assembly languages can be used universally with any operating system, as the language provides access to all the real capabilities of the processor, upon which all system call mechanisms ultimately rest. In contrast to assembly languages, most high-level programming languages are generally portable across multiple architectures but require interpreting or compiling, a much more complicated task than assembling.
Assembly language syntax
Assembly language uses a mnemonic to represent, e.g., each low-level machine instruction or opcode, each directive, typically also each architectural register, flag, etc. Some of the mnemonics may be built in and some user defined. Many operations require one or more operands in order to form a complete instruction. Most assemblers permit named constants, registers, and labels for program and memory locations, and can calculate expressions for operands. Thus, programmers are freed from tedious repetitive calculations and assembler programs are much more readable than machine code. Depending on the architecture, these elements may also be combined for specific instructions or addressing modes using offsets or other data as well as fixed addresses. Many assemblers offer additional mechanisms to facilitate program development, to control the assembly process, and to aid debugging.
Some are column oriented, with specific fields in specific columns; this was very common for machines using punched cards in the 1950s and early 1960s. Some assemblers have free-form syntax, with fields separated by delimiters, e.g., punctuation, white space. Some assemblers are hybrid, with, e.g., labels, in a specific column and other fields separated by delimiters; this became more common than column oriented syntax in the 1960s.
IBM System/360
All of the IBM assemblers for System/360, by default, have a label in column 1, fields separated by delimiters in columns 2-71, a continuation indicator in column 72 and a sequence number in columns 73-80. The delimiter for label, opcode, operands and comments is spaces, while individual operands are separated by commas and parentheses.
Terminology
A macro assembler is an assembler that includes a macroinstruction facility so that (parameterized) assembly language text can be represented by a name, and that name can be used to insert the expanded text into other code.
Open code refers to any assembler input outside of a macro definition.
A cross assembler (see also cross compiler) is an assembler that is run on a computer or operating system (the host system) of a different type from the system on which the resulting code is to run (the target system). Cross-assembling facilitates the development of programs for systems that do not have the resources to support software development, such as an embedded system or a microcontroller. In such a case, the resulting object code must be transferred to the target system, via read-only memory (ROM, EPROM, etc.), a programmer (when the read-only memory is integrated in the device, as in microcontrollers), or a data link using either an exact bit-by-bit copy of the object code or a text-based representation of that code (such as Intel hex or Motorola S-record).
A high-level assembler is a program that provides language abstractions more often associated with high-level languages, such as advanced control structures (IF/THEN/ELSE, DO CASE, etc.) and high-level abstract data types, including structures/records, unions, classes, and sets.
A microassembler is a program that helps prepare a microprogram, called firmware, to control the low level operation of a computer.
A meta-assembler is "a program that accepts the syntactic and semantic description of an assembly language, and generates an assembler for that language", or that accepts an assembler source file along with such a description and assembles the source file in accordance with that description. "Meta-Symbol" assemblers for the SDS 9 Series and SDS Sigma series of computers are meta-assemblers. Sperry Univac also provided a Meta-Assembler for the UNIVAC 1100/2200 series.
An inline assembler (or embedded assembler) is assembler code contained within a high-level language program. This is most often used in systems programs which need direct access to the hardware.
Key concepts
Assembler
An assembler program creates object code by translating combinations of mnemonics and syntax for operations and addressing modes into their numerical equivalents. This representation typically includes an operation code ("opcode") as well as other control bits and data. The assembler also calculates constant expressions and resolves symbolic names for memory locations and other entities. The use of symbolic references is a key feature of assemblers, saving tedious calculations and manual address updates after program modifications. Most assemblers also include macro facilities for performing textual substitution – e.g., to generate common short sequences of instructions as inline, instead of called subroutines.
Some assemblers may also be able to perform some simple types of instruction set-specific optimizations. One concrete example of this may be the ubiquitous x86 assemblers from various vendors. Called jump-sizing, most of them are able to perform jump-instruction replacements (long jumps replaced by short or relative jumps) in any number of passes, on request. Others may even do simple rearrangement or insertion of instructions, such as some assemblers for RISC architectures that can help optimize a sensible instruction scheduling to exploit the CPU pipeline as efficiently as possible.
Assemblers have been available since the 1950s, as the first step above machine language and before high-level programming languages such as Fortran, Algol, COBOL and Lisp. There have also been several classes of translators and semi-automatic code generators with properties similar to both assembly and high-level languages, with Speedcode as perhaps one of the better-known examples.
There may be several assemblers with different syntax for a particular CPU or instruction set architecture. For instance, an instruction to add memory data to a register in an x86-family processor might be add eax,[ebx], in original Intel syntax, whereas this would be written addl (%ebx),%eax in the AT&T syntax used by the GNU Assembler. Despite different appearances, different syntactic forms generally generate the same numeric machine code. A single assembler may also have different modes in order to support variations in syntactic forms as well as their exact semantic interpretations (such as FASM-syntax, TASM-syntax, ideal mode, etc., in the special case of x86 assembly programming).
Number of passes
There are two types of assemblers based on how many passes through the source are needed (how many times the assembler reads the source) to produce the object file.
One-pass assemblers go through the source code once. Any symbol used before it is defined will require "errata" at the end of the object code (or, at least, no earlier than the point where the symbol is defined) telling the linker or the loader to "go back" and overwrite a placeholder which had been left where the as yet undefined symbol was used.
Multi-pass assemblers create a table with all symbols and their values in the first passes, then use the table in later passes to generate code.
In both cases, the assembler must be able to determine the size of each instruction on the initial passes in order to calculate the addresses of subsequent symbols. This means that if the size of an operation referring to an operand defined later depends on the type or distance of the operand, the assembler will make a pessimistic estimate when first encountering the operation, and if necessary, pad it with one or more
"no-operation" instructions in a later pass or the errata. In an assembler with peephole optimization, addresses may be recalculated between passes to allow replacing pessimistic code with code tailored to the exact distance from the target.
The original reason for the use of one-pass assemblers was memory size and speed of assembly – often a second pass would require storing the symbol table in memory (to handle forward references), rewinding and rereading the program source on tape, or rereading a deck of cards or punched paper tape. Later computers with much larger memories (especially disc storage), had the space to perform all necessary processing without such re-reading. The advantage of the multi-pass assembler is that the absence of errata makes the linking process (or the program load if the assembler directly produces executable code) faster.
Example: in the following code snippet, a one-pass assembler would be able to determine the address of the backward reference BKWD when assembling statement S2, but would not be able to determine the address of the forward reference FWD when assembling the branch statement S1; indeed, FWD may be undefined. A two-pass assembler would determine both addresses in pass 1, so they would be known when generating code in pass 2.

S1   B    FWD
     ...
FWD  EQU  *
     ...
BKWD EQU  *
     ...
S2   B    BKWD
High-level assemblers
More sophisticated high-level assemblers provide language abstractions such as:
High-level procedure/function declarations and invocations
Advanced control structures (IF/THEN/ELSE, SWITCH)
High-level abstract data types, including structures/records, unions, classes, and sets
Sophisticated macro processing (although available on ordinary assemblers since the late 1950s for, e.g., the IBM 700 series and IBM 7000 series, and since the 1960s for IBM System/360 (S/360), amongst other machines)
Object-oriented programming features such as classes, objects, abstraction, polymorphism, and inheritance
See Language design below for more details.
Assembly language
A program written in assembly language consists of a series of mnemonic processor instructions and meta-statements (known variously as declarative operations, directives, pseudo-instructions, pseudo-operations and pseudo-ops), comments and data. Assembly language instructions usually consist of an opcode mnemonic followed by an operand, which might be a list of data, arguments or parameters. Some instructions may be "implied," which means the data upon which the instruction operates is implicitly defined by the instruction itself—such an instruction does not take an operand. The resulting statement is translated by an assembler into machine language instructions that can be loaded into memory and executed.
For example, the instruction below tells an x86/IA-32 processor to move an immediate 8-bit value into a register. The binary code for this instruction is 10110 followed by a 3-bit identifier for which register to use. The identifier for the AL register is 000, so the following machine code loads the AL register with the data 01100001.
10110000 01100001
This binary computer code can be made more human-readable by expressing it in hexadecimal as follows.
B0 61
Here, B0 means 'Move a copy of the following value into AL', and 61 is a hexadecimal representation of the value 01100001, which is 97 in decimal. Assembly language for the 8086 family provides the mnemonic MOV (an abbreviation of move) for instructions such as this, so the machine code above can be written as follows in assembly language, complete with an explanatory comment if required, after the semicolon. This is much easier to read and to remember.
MOV AL, 61h ; Load AL with 97 decimal (61 hex)
In some assembly languages (including this one) the same mnemonic, such as MOV, may be used for a family of related instructions for loading, copying and moving data, whether these are immediate values, values in registers, or memory locations pointed to by values in registers or by immediate (a.k.a. direct) addresses. Other assemblers may use separate opcode mnemonics such as L for "move memory to register", ST for "move register to memory", LR for "move register to register", MVI for "move immediate operand to memory", etc.
If the same mnemonic is used for different instructions, that means that the mnemonic corresponds to several different binary instruction codes, excluding data (e.g. the 61h in this example), depending on the operands that follow the mnemonic. For example, for the x86/IA-32 CPUs, the Intel assembly language syntax MOV AL, AH represents an instruction that moves the contents of register AH into register AL. The hexadecimal form of this instruction is:
88 E0
The first byte, 88h, identifies a move between a byte-sized register and either another register or memory, and the second byte, E0h, is encoded (with three bit-fields) to specify that both operands are registers, the source is AH, and the destination is AL.
In a case like this where the same mnemonic can represent more than one binary instruction, the assembler determines which instruction to generate by examining the operands. In the first example, the operand 61h is a valid hexadecimal numeric constant and is not a valid register name, so only the B0 instruction can be applicable. In the second example, the operand AH is a valid register name and not a valid numeric constant (hexadecimal, decimal, octal, or binary), so only the 88 instruction can be applicable.
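As a hedged illustration of that operand-driven choice (this is not the code of any real assembler; the helper names and the restriction to MOV AL are assumptions of this sketch), the following C fragment emits B0 plus an immediate byte for a numeric operand and 88 with a ModRM byte for a register operand:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

static int reg8_code(const char *op) {                 /* x86 byte-register encodings AL=0 ... BH=7 */
    static const char *regs[8] = { "AL", "CL", "DL", "BL", "AH", "CH", "DH", "BH" };
    for (int i = 0; i < 8; i++)
        if (strcmp(op, regs[i]) == 0) return i;
    return -1;
}

static void assemble_mov_al(const char *operand) {
    int src = reg8_code(operand);
    if (src >= 0) {
        /* register form: opcode 88, ModRM = 11 src dst with dst = AL = 000 */
        printf("MOV AL, %s  -> 88 %02X\n", operand, (unsigned)(0xC0 | (src << 3)));
    } else {
        /* immediate form: opcode B0 (B0 + register code of AL), then the byte value */
        unsigned value = (unsigned)strtoul(operand, NULL, 16);
        printf("MOV AL, %sh -> B0 %02X\n", operand, value & 0xFF);
    }
}

int main(void) {
    assemble_mov_al("61");   /* numeric constant -> B0 61 */
    assemble_mov_al("AH");   /* register operand -> 88 E0 */
    return 0;
}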
Assembly languages are always designed so that this sort of unambiguousness is universally enforced by their syntax. For example, in the Intel x86 assembly language, a hexadecimal constant must start with a numeral digit, so that the hexadecimal number 'A' (equal to decimal ten) would be written as 0Ah or 0AH, not AH, specifically so that it cannot appear to be the name of register AH. (The same rule also prevents ambiguity with the names of registers BH, CH, and DH, as well as with any user-defined symbol that ends with the letter H and otherwise contains only characters that are hexadecimal digits, such as the word "BEACH".)
Returning to the original example, while the x86 opcode 10110000 (B0) copies an 8-bit value into the AL register, 10110001 (B1) moves it into CL and 10110010 (B2) does so into DL. Assembly language examples for these follow.
MOV AL, 1h ; Load AL with immediate value 1
MOV CL, 2h ; Load CL with immediate value 2
MOV DL, 3h ; Load DL with immediate value 3
The syntax of MOV can also be more complex as the following examples show.
MOV EAX, [EBX] ; Move the 4 bytes in memory at the address contained in EBX into EAX
MOV [ESI+EAX], CL ; Move the contents of CL into the byte at address ESI+EAX
MOV DS, DX ; Move the contents of DX into segment register DS
In each case, the MOV mnemonic is translated directly into one of the opcodes 88-8C, 8E, A0-A3, B0-BF, C6 or C7 by an assembler, and the programmer normally does not have to know or remember which.
Transforming assembly language into machine code is the job of an assembler, and the reverse can at least partially be achieved by a disassembler. Unlike high-level languages, there is a one-to-one correspondence between many simple assembly statements and machine language instructions. However, in some cases, an assembler may provide pseudoinstructions (essentially macros) which expand into several machine language instructions to provide commonly needed functionality. For example, for a machine that lacks a "branch if greater or equal" instruction, an assembler may provide a pseudoinstruction that expands to the machine's "set if less than" and "branch if zero (on the result of the set instruction)". Most full-featured assemblers also provide a rich macro language (discussed below) which is used by vendors and programmers to generate more complex code and data sequences. Since the information about pseudoinstructions and macros defined in the assembler environment is not present in the object program, a disassembler cannot reconstruct the macro and pseudoinstruction invocations but can only disassemble the actual machine instructions that the assembler generated from those abstract assembly-language entities. Likewise, since comments in the assembly language source file are ignored by the assembler and have no effect on the object code it generates, a disassembler is always completely unable to recover source comments.
Each computer architecture has its own machine language. Computers differ in the number and type of operations they support, in the different sizes and numbers of registers, and in the representations of data in storage. While most general-purpose computers are able to carry out essentially the same functionality, the ways they do so differ; the corresponding assembly languages reflect these differences.
Multiple sets of mnemonics or assembly-language syntax may exist for a single instruction set, typically instantiated in different assembler programs. In these cases, the most popular one is usually that supplied by the CPU manufacturer and used in its documentation.
Two examples of CPUs that have two different sets of mnemonics are the Intel 8080 family and the Intel 8086/8088. Because Intel claimed copyright on its assembly language mnemonics (on each page of their documentation published in the 1970s and early 1980s, at least), some companies that independently produced CPUs compatible with Intel instruction sets invented their own mnemonics. The Zilog Z80 CPU, an enhancement of the Intel 8080A, supports all the 8080A instructions plus many more; Zilog invented an entirely new assembly language, not only for the new instructions but also for all of the 8080A instructions. For example, where Intel uses the mnemonics MOV, MVI, LDA, STA, LXI, LDAX, STAX, LHLD, and SHLD for various data transfer instructions, the Z80 assembly language uses the mnemonic LD for all of them. A similar case is the NEC V20 and V30 CPUs, enhanced copies of the Intel 8086 and 8088, respectively. Like Zilog with the Z80, NEC invented new mnemonics for all of the 8086 and 8088 instructions, to avoid accusations of infringement of Intel's copyright. (It is questionable whether such copyrights can be valid, and later CPU companies such as AMD and Cyrix republished Intel's x86/IA-32 instruction mnemonics exactly with neither permission nor legal penalty.) It is doubtful whether in practice many people who programmed the V20 and V30 actually wrote in NEC's assembly language rather than Intel's; since any two assembly languages for the same instruction set architecture are isomorphic (somewhat like English and Pig Latin), there is no requirement to use a manufacturer's own published assembly language with that manufacturer's products.
Language design
Basic elements
There is a large degree of diversity in the way the authors of assemblers categorize statements and in the nomenclature that they use. In particular, some describe anything other than a machine mnemonic or extended mnemonic as a pseudo-operation (pseudo-op). A typical assembly language consists of 3 types of instruction statements that are used to define program operations:
Opcode mnemonics
Data definitions
Assembly directives
Opcode mnemonics and extended mnemonics
Instructions (statements) in assembly language are generally very simple, unlike those in high-level languages. Generally, a mnemonic is a symbolic name for a single executable machine language instruction (an opcode), and there is at least one opcode mnemonic defined for each machine language instruction. Each instruction typically consists of an operation or opcode plus zero or more operands. Most instructions refer to a single value or a pair of values. Operands can be immediate (value coded in the instruction itself), registers specified in the instruction or implied, or the addresses of data located elsewhere in storage. This is determined by the underlying processor architecture: the assembler merely reflects how this architecture works. Extended mnemonics are often used to specify a combination of an opcode with a specific operand, e.g., the System/360 assemblers use B as an extended mnemonic for BC with a mask of 15 and NOP ("NO OPeration" – do nothing for one step) for BC with a mask of 0.
Extended mnemonics are often used to support specialized uses of instructions, often for purposes not obvious from the instruction name. For example, many CPUs do not have an explicit NOP instruction, but do have instructions that can be used for the purpose. In 8086 CPUs the instruction xchg ax,ax is used for nop, with nop being a pseudo-opcode to encode the instruction xchg ax,ax. Some disassemblers recognize this and will decode the xchg ax,ax instruction as nop. Similarly, IBM assemblers for System/360 and System/370 use the extended mnemonics NOP and NOPR for BC and BCR with zero masks. For the SPARC architecture, these are known as synthetic instructions.
Some assemblers also support simple built-in macro-instructions that generate two or more machine instructions. For instance, with some Z80 assemblers the instruction ld hl,bc is recognized to generate ld l,c followed by ld h,b. These are sometimes known as pseudo-opcodes.
Mnemonics are arbitrary symbols; in 1985 the IEEE published Standard 694 for a uniform set of mnemonics to be used by all assemblers. The standard has since been withdrawn.
Data directives
There are instructions used to define data elements to hold data and variables. They define the type of data, the length and the alignment of data. These instructions can also define whether the data is available to outside programs (programs assembled separately) or only to the program in which the data section is defined. Some assemblers classify these as pseudo-ops.
Assembly directives
Assembly directives, also called pseudo-opcodes, pseudo-operations or pseudo-ops, are commands given to an assembler "directing it to perform operations other than assembling instructions". Directives affect how the assembler operates and "may affect the object code, the symbol table, the listing file, and the values of internal assembler parameters". Sometimes the term pseudo-opcode is reserved for directives that generate object code, such as those that generate data.
The names of pseudo-ops often start with a dot to distinguish them from machine instructions. Pseudo-ops can make the assembly of the program dependent on parameters input by a programmer, so that one program can be assembled in different ways, perhaps for different applications. Or, a pseudo-op can be used to manipulate presentation of a program to make it easier to read and maintain. Another common use of pseudo-ops is to reserve storage areas for run-time data and optionally initialize their contents to known values.
Symbolic assemblers let programmers associate arbitrary names (labels or symbols) with memory locations and various constants. Usually, every constant and variable is given a name so instructions can reference those locations by name, thus promoting self-documenting code. In executable code, the name of each subroutine is associated with its entry point, so any calls to a subroutine can use its name. Inside subroutines, GOTO destinations are given labels. Some assemblers support local symbols which are often lexically distinct from normal symbols (e.g., the use of "10$" as a GOTO destination).
Some assemblers, such as NASM, provide flexible symbol management, letting programmers manage different namespaces, automatically calculate offsets within data structures, and assign labels that refer to literal values or the result of simple computations performed by the assembler. Labels can also be used to initialize constants and variables with relocatable addresses.
Assembly languages, like most other computer languages, allow comments to be added to program source code that will be ignored during assembly. Judicious commenting is essential in assembly language programs, as the meaning and purpose of a sequence of binary machine instructions can be difficult to determine. The "raw" (uncommented) assembly language generated by compilers or disassemblers is quite difficult to read when changes must be made.
Macros
Many assemblers support predefined macros, and others support programmer-defined (and repeatedly re-definable) macros involving sequences of text lines in which variables and constants are embedded. The macro definition is most commonly a mixture of assembler statements, e.g., directives, symbolic machine instructions, and templates for assembler statements. This sequence of text lines may include opcodes or directives. Once a macro has been defined its name may be used in place of a mnemonic. When the assembler processes such a statement, it replaces the statement with the text lines associated with that macro, then processes them as if they existed in the source code file (including, in some assemblers, expansion of any macros existing in the replacement text). Macros in this sense date to IBM autocoders of the 1950s.
Macro assemblers typically have directives to, e.g., define macros, define variables, set variables to the result of an arithmetic, logical or string expression, iterate, conditionally generate code. Some of those directives may be restricted to use within a macro definition, e.g., MEXIT in HLASM, while others may be permitted within open code (outside macro definitions), e.g., AIF and COPY in HLASM.
In assembly language, the term "macro" represents a more comprehensive concept than it does in some other contexts, such as the pre-processor in the C programming language, where its #define directive typically is used to create short single line macros. Assembler macro instructions, like macros in PL/I and some other languages, can be lengthy "programs" by themselves, executed by interpretation by the assembler during assembly.
Since macros can have 'short' names but expand to several or indeed many lines of code, they can be used to make assembly language programs appear to be far shorter, requiring fewer lines of source code, as with higher level languages. They can also be used to add higher levels of structure to assembly programs, optionally introduce embedded debugging code via parameters and other similar features.
Macro assemblers often allow macros to take parameters. Some assemblers include quite sophisticated macro languages, incorporating such high-level language elements as optional parameters, symbolic variables, conditionals, string manipulation, and arithmetic operations, all usable during the execution of a given macro, and allowing macros to save context or exchange information. Thus a macro might generate numerous assembly language instructions or data definitions, based on the macro arguments. This could be used to generate record-style data structures or "unrolled" loops, for example, or could generate entire algorithms based on complex parameters. For instance, a "sort" macro could accept the specification of a complex sort key and generate code crafted for that specific key, not needing the run-time tests that would be required for a general procedure interpreting the specification. An organization using assembly language that has been heavily extended using such a macro suite can be considered to be working in a higher-level language since such programmers are not working with a computer's lowest-level conceptual elements. Underlining this point, macros were used to implement an early virtual machine in SNOBOL4 (1967), which was written in the SNOBOL Implementation Language (SIL), an assembly language for a virtual machine. The target machine would translate this to its native code using a macro assembler. This allowed a high degree of portability for the time.
Macros were used to customize large scale software systems for specific customers in the mainframe era and were also used by customer personnel to satisfy their employers' needs by making specific versions of manufacturer operating systems. This was done, for example, by systems programmers working with IBM's Conversational Monitor System / Virtual Machine (VM/CMS) and with IBM's "real time transaction processing" add-ons, Customer Information Control System CICS, and ACP/TPF, the airline/financial system that began in the 1970s and still runs many large computer reservation systems (CRS) and credit card systems today.
It is also possible to use solely the macro processing abilities of an assembler to generate code written in completely different languages, for example, to generate a version of a program in COBOL using a pure macro assembler program containing lines of COBOL code inside assembly time operators instructing the assembler to generate arbitrary code. IBM OS/360 uses macros to perform system generation. The user specifies options by coding a series of assembler macros. Assembling these macros generates a job stream to build the system, including job control language and utility control statements.
This is because, as was realized in the 1960s, the concept of "macro processing" is independent of the concept of "assembly", the former being in modern terms more word processing, text processing, than generating object code. The concept of macro processing appeared, and appears, in the C programming language, which supports "preprocessor instructions" to set variables, and make conditional tests on their values. Unlike certain previous macro processors inside assemblers, the C preprocessor is not Turing-complete because it lacks the ability to either loop or "go to", the latter allowing programs to loop.
Despite the power of macro processing, it fell into disuse in many high level languages (major exceptions being C, C++ and PL/I) while remaining a perennial for assemblers.
Macro parameter substitution is strictly by name: at macro processing time, the value of a parameter is textually substituted for its name. The most famous class of bugs resulting was the use of a parameter that itself was an expression and not a simple name when the macro writer expected a name. In the macro:
foo: macro a
load a*b
the intention was that the caller would provide the name of a variable, and the "global" variable or constant b would be used to multiply "a". If foo is called with the parameter a-c, the macro expansion of load a-c*b occurs. To avoid any possible ambiguity, users of macro processors can parenthesize formal parameters inside macro definitions, or callers can parenthesize the input parameters.
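The same by-name substitution, the same class of bug, and the same parenthesizing fix also appear in the C preprocessor discussed elsewhere in this article; the macro below is a hypothetical analogy, not the assembler example itself:

#include <stdio.h>

int b = 4;                               /* the "global" scale factor */

#define SCALE(a)    (a * b)              /* formal parameter not parenthesized */
#define SCALE_OK(a) ((a) * b)            /* formal parameter parenthesized     */

int main(void) {
    int x = 10, c = 2;
    printf("%d\n", SCALE(x - c));        /* expands to (x - c * b)   -> prints 2  */
    printf("%d\n", SCALE_OK(x - c));     /* expands to ((x - c) * b) -> prints 32 */
    return 0;
}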
Support for structured programming
Packages of macros have been written providing structured programming elements to encode execution flow. The earliest example of this approach was in the Concept-14 macro set, originally proposed by Harlan Mills (March 1970), and implemented by Marvin Kessler at IBM's Federal Systems Division, which provided IF/ELSE/ENDIF and similar control flow blocks for OS/360 assembler programs. This was a way to reduce or eliminate the use of GOTO operations in assembly code, one of the main factors causing spaghetti code in assembly language. This approach was widely accepted in the early 1980s (the latter days of large-scale assembly language use). IBM's High Level Assembler Toolkit includes such a macro package.
A curious design was A-natural, a "stream-oriented" assembler for 8080/Z80 processors from Whitesmiths Ltd. (developers of the Unix-like Idris operating system, and what was reported to be the first commercial C compiler). The language was classified as an assembler because it worked with raw machine elements such as opcodes, registers, and memory references; but it incorporated an expression syntax to indicate execution order. Parentheses and other special symbols, along with block-oriented structured programming constructs, controlled the sequence of the generated instructions. A-natural was built as the object language of a C compiler, rather than for hand-coding, but its logical syntax won some fans.
There has been little apparent demand for more sophisticated assemblers since the decline of large-scale assembly language development. In spite of that, they are still being developed and applied in cases where resource constraints or peculiarities in the target system's architecture prevent the effective use of higher-level languages.
Assemblers with a strong macro engine allow structured programming via macros, such as the switch macro provided with the Masm32 package (this code is a complete program):
include \masm32\include\masm32rt.inc ; use the Masm32 library
.code
demomain:
REPEAT 20
switch rv(nrandom, 9) ; generate a number between 0 and 8
mov ecx, 7
case 0
print "case 0"
case ecx ; in contrast to most other programming languages,
print "case 7" ; the Masm32 switch allows "variable cases"
case 1 .. 3
.if eax==1
print "case 1"
.elseif eax==2
print "case 2"
.else
print "cases 1 to 3: other"
.endif
case 4, 6, 8
print "cases 4, 6 or 8"
default
mov ebx, 19 ; print 20 stars
.Repeat
print "*"
dec ebx
.Until Sign? ; loop until the sign flag is set
endsw
print chr$(13, 10)
ENDM
exit
end demomain
Use of assembly language
Historical perspective
Assembly languages were not available at the time when the stored-program computer was introduced. Kathleen Booth "is credited with inventing assembly language" based on theoretical work she began in 1947, while working on the ARC2 at Birkbeck, University of London following consultation by Andrew Booth (later her husband) with mathematician John von Neumann and physicist Herman Goldstine at the Institute for Advanced Study.
In late 1948, the Electronic Delay Storage Automatic Calculator (EDSAC) had an assembler (named "initial orders") integrated into its bootstrap program. It used one-letter mnemonics developed by David Wheeler, who is credited by the IEEE Computer Society as the creator of the first "assembler". Reports on the EDSAC introduced the term "assembly" for the process of combining fields into an instruction word. SOAP (Symbolic Optimal Assembly Program) was an assembly language for the IBM 650 computer written by Stan Poley in 1955.
Assembly languages eliminate much of the error-prone, tedious, and time-consuming first-generation programming needed with the earliest computers, freeing programmers from tedium such as remembering numeric codes and calculating addresses.
Assembly languages were once widely used for all sorts of programming. However, by the 1980s (1990s on microcomputers), their use had largely been supplanted by higher-level languages, in the search for improved programming productivity. Today, assembly language is still used for direct hardware manipulation, access to specialized processor instructions, or to address critical performance issues. Typical uses are device drivers, low-level embedded systems, and real-time systems.
Historically, numerous programs have been written entirely in assembly language. The Burroughs MCP (1961) was the first operating system not developed entirely in assembly language; it was written in Executive Systems Problem Oriented Language (ESPOL), an Algol dialect. Many commercial applications were written in assembly language as well, including a large amount of the IBM mainframe software written by large corporations. COBOL, FORTRAN and some PL/I eventually displaced much of this work, although a number of large organizations retained assembly-language application infrastructures well into the 1990s.
Most early microcomputers relied on hand-coded assembly language, including most operating systems and large applications. This was because these systems had severe resource constraints, imposed idiosyncratic memory and display architectures, and provided limited, buggy system services. Perhaps more important was the lack of first-class high-level language compilers suitable for microcomputer use. A psychological factor may have also played a role: the first generation of microcomputer programmers retained a hobbyist, "wires and pliers" attitude.
In a more commercial context, the biggest reasons for using assembly language were minimal bloat (size), minimal overhead, greater speed, and reliability.
Typical examples of large assembly language programs from this time are the IBM PC DOS operating system, the Turbo Pascal compiler, and early applications such as the spreadsheet program Lotus 1-2-3. Assembly language was used to get the best performance out of the Sega Saturn, a console that was notoriously challenging to develop and program games for. The 1993 arcade game NBA Jam is another example.
Assembly language was the primary development language on many popular home computers of the 1980s and 1990s (such as the MSX, Sinclair ZX Spectrum, Commodore 64, Commodore Amiga, and Atari ST). This was in large part because interpreted BASIC dialects on these systems offered insufficient execution speed, as well as insufficient facilities to take full advantage of the available hardware. Some of these systems even had an integrated development environment (IDE) with highly advanced debugging and macro facilities. Some compilers available for the Radio Shack TRS-80 and its successors had the capability to combine inline assembly source with high-level program statements. Upon compilation, a built-in assembler produced inline machine code.
Current usage
There have always been debates over the usefulness and performance of assembly language relative to high-level languages.
Although assembly language has specific niche uses where it is important (see below), there are other tools for optimization.
The TIOBE index of programming language popularity has ranked assembly language at 11, ahead of Visual Basic, for example. Assembler can be used to optimize for speed or to optimize for size. In the case of speed optimization, modern optimizing compilers are claimed to render high-level languages into code that can run as fast as hand-written assembly, despite the counter-examples that can be found. The complexity of modern processors and memory sub-systems makes effective optimization increasingly difficult for compilers, as well as for assembly programmers. Moreover, increasing processor performance has meant that most CPUs sit idle most of the time, with delays caused by predictable bottlenecks such as cache misses, I/O operations and paging. This has made raw code execution speed a non-issue for many programmers.
There are some situations in which developers might choose to use assembly language:
Writing code for systems that have limited high-level language options, such as the Atari 2600, Commodore 64, and graphing calculators. Programs for these 1970s and 1980s computers are often written in the context of demoscene or retrogaming subcultures.
Code that must interact directly with the hardware, for example in device drivers and interrupt handlers.
In an embedded processor or DSP, high-repetition interrupts require the smallest possible number of cycles per interrupt; an interrupt that occurs 1,000 or 10,000 times a second is a typical example.
Programs that need to use processor-specific instructions not implemented in a compiler. A common example is the bitwise rotation instruction at the core of many encryption algorithms, as well as querying the parity of a byte or the 4-bit carry of an addition (see the rotation sketch after this list).
A stand-alone executable of compact size is required that must execute without recourse to the run-time components or libraries associated with a high-level language. Examples have included firmware for telephones, automobile fuel and ignition systems, air-conditioning control systems, security systems, and sensors.
Programs with performance-sensitive inner loops, where assembly language provides optimization opportunities that are difficult to achieve in a high-level language. For example, linear algebra with BLAS or discrete cosine transformation (e.g. SIMD assembly version from x264).
Programs that create vectorized functions for programs in higher-level languages such as C. In the higher-level language this is sometimes aided by compiler intrinsic functions which map directly to SIMD mnemonics and therefore translate essentially one-to-one into instructions for the given vector processor (see the intrinsics sketch after this list).
Real-time programs such as simulations, flight navigation systems, and medical equipment. For example, in a fly-by-wire system, telemetry must be interpreted and acted upon within strict time constraints. Such systems must eliminate sources of unpredictable delays, which may be created by (some) interpreted languages, automatic garbage collection, paging operations, or preemptive multitasking; some higher-level languages incorporate run-time components and operating system interfaces that can introduce such delays. Choosing assembly or lower-level languages for such systems gives programmers greater visibility and control over processing details.
Cryptographic algorithms that must always take strictly the same time to execute, preventing timing attacks (see the constant-time comparison sketch after this list).
Modifying and extending legacy code written for IBM mainframe computers.
Situations where complete control over the environment is required, in extremely high-security situations where nothing can be taken for granted.
Computer viruses, bootloaders, certain device drivers, or other items very close to the hardware or low-level operating system.
Instruction set simulators for monitoring, tracing and debugging where additional overhead is kept to a minimum.
Situations where no high-level language exists, on a new or specialized processor for which no cross compiler is available.
Reverse-engineering and modifying program files such as:
existing binaries that may or may not have originally been written in a high-level language, for example when trying to recreate programs for which source code is not available or has been lost, or cracking copy protection of proprietary software.
Video games (also termed ROM hacking), which is possible via several methods. The most widely employed method is altering program code at the assembly language level.
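As an illustration of the processor-specific rotation example above, the following C sketch (an illustration under the assumption of a modern optimizing compiler, not taken from any particular project) shows the portable rotate idiom that most compilers recognize and reduce to the processor's native rotate instruction, such as ROL on x86, an effect that would otherwise require hand-written assembly or an intrinsic:

#include <stdint.h>

/* Rotate x left by n bits. Written in portable C; most optimizing
   compilers recognize this pattern and emit a single native rotate
   instruction (e.g. ROL on x86) rather than two shifts and an OR.   */
static inline uint32_t rotl32(uint32_t x, unsigned n)
{
    n &= 31;                              /* keep the shift count in range */
    return (x << n) | (x >> ((32 - n) & 31));
}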
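The compiler intrinsics mentioned in the list can be sketched as follows. This hypothetical example assumes an x86 target with SSE support; each _mm_* call from the standard <xmmintrin.h> header corresponds closely to one SIMD instruction, so the generated code mirrors what would be written by hand in assembly:

#include <xmmintrin.h>                    /* SSE intrinsics, x86 only */

/* Add two float arrays four elements at a time. Assumes n is a
   multiple of 4 and that the pointers are 16-byte aligned.          */
void add_float_arrays(float *dst, const float *a, const float *b, int n)
{
    for (int i = 0; i < n; i += 4) {
        __m128 va = _mm_load_ps(&a[i]);   /* movaps: load 4 floats    */
        __m128 vb = _mm_load_ps(&b[i]);
        __m128 vs = _mm_add_ps(va, vb);   /* addps: 4 additions at once */
        _mm_store_ps(&dst[i], vs);        /* movaps: store 4 results  */
    }
}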
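For the constant-time cryptographic requirement above, the usual technique, whether written in C or in assembly, is to avoid data-dependent branches. The sketch below (illustrative only) compares two buffers in time that depends only on their length, never on where they first differ; such routines are often written in assembly precisely so that a compiler cannot reintroduce an early exit:

#include <stddef.h>
#include <stdint.h>

/* Returns 0 if the two n-byte buffers are equal, nonzero otherwise.
   Unlike memcmp, it never exits early, so its running time does not
   leak the position of the first differing byte.                    */
int constant_time_compare(const uint8_t *a, const uint8_t *b, size_t n)
{
    uint8_t diff = 0;
    for (size_t i = 0; i < n; i++)
        diff |= (uint8_t)(a[i] ^ b[i]);
    return diff;
}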
Assembly language is still taught in most computer science and electronic engineering programs. Although few programmers today regularly work with assembly language as a tool, the underlying concepts remain important. Such fundamental topics as binary arithmetic, memory allocation, stack processing, character set encoding, interrupt processing, and compiler design would be hard to study in detail without a grasp of how a computer operates at the hardware level. Since a computer's behavior is fundamentally defined by its instruction set, the logical way to learn such concepts is to study an assembly language. Most modern computers have similar instruction sets, so studying a single assembly language is sufficient to learn the basic concepts, to recognize situations where the use of assembly language might be appropriate, and to see how efficient executable code can be created from high-level languages.
Typical applications
Assembly language is typically used in a system's boot code, the low-level code that initializes and tests the system hardware prior to booting the operating system and that is often stored in ROM (for example, the BIOS on IBM-compatible PC systems and on CP/M).
Assembly language is often used for low-level code, for instance for operating system kernels, which cannot rely on the availability of pre-existing system calls and must indeed implement them for the particular processor architecture on which the system will be running.
Some compilers translate high-level languages into assembly first before fully compiling, allowing the assembly code to be viewed for debugging and optimization purposes.
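For instance, with GCC or Clang the intermediate assembly can be inspected by compiling with the -S option. Given a trivial translation unit such as the sketch below (the file name is chosen only for illustration), the command "gcc -S -O2 add.c" writes human-readable assembly to add.s instead of producing object code:

/* add.c - a minimal function whose generated assembly is easy to read.
   Compile with "gcc -S -O2 add.c" to obtain add.s for inspection.     */
int add(int a, int b)
{
    return a + b;
}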
Some compilers for relatively low-level languages, such as Pascal or C, allow the programmer to embed assembly language directly in the source code (so called inline assembly). Programs using such facilities can then construct abstractions using different assembly language on each hardware platform. The system's portable code can then use these processor-specific components through a uniform interface.
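A minimal sketch of such inline assembly, assuming GCC's extended asm syntax on an x86-64 target (the helper name is illustrative), reads the processor's time-stamp counter, an operation with no standard C equivalent:

#include <stdint.h>

/* RDTSC has no standard C equivalent; GCC/Clang extended inline asm
   binds the instruction's outputs to the EAX and EDX registers via
   the "=a" and "=d" constraints.                                    */
static inline uint64_t read_tsc(void)
{
    uint32_t lo, hi;
    __asm__ volatile ("rdtsc" : "=a"(lo), "=d"(hi));
    return ((uint64_t)hi << 32) | lo;
}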
Assembly language is useful in reverse engineering. Many programs are distributed only in machine code form, which is straightforward to translate into assembly language by a disassembler but more difficult to translate into a higher-level language through a decompiler. Tools such as the Interactive Disassembler make extensive use of disassembly for such a purpose. This technique is used by hackers to crack commercial software, and by competitors to produce software with similar results.
Assembly language was also used to enhance speed of execution, especially in early personal computers with limited processing power and RAM.
Assemblers can be used to generate blocks of data, with no high-level language overhead, from formatted and commented source code, to be used by other code.
See also
Compiler
Comparison of assemblers
Disassembler
Hexadecimal
Instruction set architecture
Little man computer – an educational computer model with a base-10 assembly language
Nibble
Typed assembly language
Notes
References
Further reading
Kann, Charles W. (2021). "Introduction to Assembly Language Programming: From Soup to Nuts: ARM Edition"
("An online book full of helpful ASM info, tutorials and code examples" by the ASM Community, archived at the internet archive.)
External links
Unix Assembly Language Programming
Linux Assembly
PPR: Learning Assembly Language
NASM – The Netwide Assembler (a popular assembly language)
Assembly Language Programming Examples
Authoring Windows Applications In Assembly Language
Assembly Optimization Tips by Mark Larson
The table for assembly language to machine code
Assembly language
Computer-related introductions in 1949
Embedded systems
Low-level programming languages
Programming language implementation
Programming languages created in 1949
Al-Qaeda
Al-Qaeda (translation: "the Base", "the Foundation"; alternatively spelled al-Qaida and al-Qa'ida), officially known as Qaedat al-Jihad, is a multinational militant Sunni Islamic extremist network composed of Salafist jihadists. It was founded in 1988 by Osama bin Laden, Abdullah Azzam, and several other Arab volunteers during the Soviet–Afghan War.
Al-Qaeda has been designated as a terrorist group by the United Nations Security Council (the permanent members of which are China, France, Russia, the United Kingdom and the United States), the North Atlantic Treaty Organization (NATO), the European Union, India and various other countries (see below). Al-Qaeda has mounted attacks on non-military and military targets in various countries, including the 1998 United States embassy bombings, the September 11 attacks, and the 2002 Bali bombings.
The United States government responded to the September 11 attacks by launching the "war on terror", which sought to undermine al-Qaeda and its allies. The deaths of key leaders, including that of Osama bin Laden, have led al-Qaeda's operations to shift from top-down organization and planning of attacks, to the planning of attacks which are carried out by a loose network of associated groups and lone-wolf operators. Al-Qaeda characteristically organises attacks which include suicide attacks and the simultaneous bombing of several targets. Al-Qaeda ideologues envision the violent removal of all foreign and secular influences in Muslim countries, which it perceives as corrupt deviations.
Al-Qaeda members believe a Christian–Jewish alliance (led by the United States) is conspiring to wage war against Islam and destroy it. As Salafist jihadists, members of al-Qaeda believe that killing non-combatants is religiously sanctioned. Al-Qaeda also opposes what it regards as man-made laws, and wants to replace them exclusively with a strict form of sharīʿa (Islamic religious law which is perceived as divine law).
Al-Qaeda has carried out many attacks on people whom it considers kāfir. It is also responsible for instigating sectarian violence among Muslims. Al-Qaeda regards liberal Muslims, Shias, Sufis, and other Islamic sects as heretical and its members and sympathizers have attacked their mosques, shrines, and gatherings. Examples of sectarian attacks include the 2004 Ashoura massacre, the 2006 Sadr City bombings, the April 2007 Baghdad bombings and the 2007 Yazidi community bombings.
Following the death of Osama bin Laden in 2011, the group has been led by Egyptian Ayman al-Zawahiri, and as of 2021 has reportedly suffered from a deterioration of central command over its regional operations.
Organization
Al-Qaeda only indirectly controls its day-to-day operations. Its philosophy calls for the centralization of decision making, while allowing for the decentralization of execution. Al-Qaeda's top leaders have defined the organization's ideology and guiding strategy, and they have also articulated simple and easy-to-receive messages. At the same time, mid-level organizations were given autonomy, but they had to consult with top management before large-scale attacks and assassinations. Top management included the shura council as well as committees on military operations, finance, and information sharing. Through al-Qaeda's information committees, bin Laden placed special emphasis on communicating with his groups. However, after the War on Terror began, al-Qaeda's leadership became isolated. As a result, the leadership has become decentralized, and the organization has become regionalized into several al-Qaeda groups.
Many terrorism experts do not believe that the global jihadist movement is driven at every level by al-Qaeda's leadership. However, bin Laden held considerable ideological sway over some Muslim extremists before his death. Experts argue that al-Qaeda has fragmented into a number of disparate regional movements, and that these groups bear little connection with one another.
This view mirrors the account given by Osama bin Laden in his October 2001 interview with Tayseer Allouni:
Bruce Hoffman, however, sees al-Qaeda as a cohesive network that is strongly led from the Pakistani tribal areas.
Affiliates
Al-Qaeda has the following direct affiliates:
Al-Qaeda in the Arabian Peninsula (AQAP)
Al-Qaeda in the Indian Subcontinent (AQIS)
Al-Qaeda in the Islamic Maghreb (AQIM)
al-Shabaab
Jama'at Nasr al-Islam wal Muslimin (JNIM)
Al-Qaeda in Bosnia and Herzegovina
Al-Qaeda in Caucasus and Russia
Al-Qaeda in Gaza
Al-Qaeda in Kurdistan
Al-Qaeda in Lebanon
Al Qaeda in Spain
Al-Qaeda in the Malay Archipelago
Al-Qaeda in the Sinai Peninsula
Guardians of Religion Organization
Al-Qaeda in the Land of the Two Niles (AQTN).
The following are presently believed to be indirect affiliates of al-Qaeda:
Caucasus Emirate (factions)
Fatah al-Islam
Islamic Jihad Union
Islamic Movement of Uzbekistan
Jaish-e-Mohammed
Jemaah Islamiyah
Lashkar-e-Taiba
Moroccan Islamic Combatant Group
Al-Qaeda's former affiliates include the following:
Abu Sayyaf (pledged allegiance to ISIL in 2014)
Al-Mourabitoun (joined JNIM in 2017)
Al-Qaeda in Iraq (became the Islamic State of Iraq, which later seceded from al-Qaeda and became ISIL)
Al-Qaeda in the Lands Beyond the Sahel (inactive since 2015)
Ansar al-Islam (majority merged with ISIL in 2014)
Ansar Dine (joined JNIM in 2017)
Islamic Jihad of Yemen (became AQAP)
Jund al-Aqsa (defunct)
Movement for Oneness and Jihad in West Africa (merged with Al-Mulathameen to form Al-Mourabitoun in 2013)
Rajah Sulaiman movement (defunct)
Al-Nusra Front (became Hayat Tahrir al-Sham and split ties in 2017, disputed)
Ansar Bait al-Maqdis (pledged alliance to ISIL and adopted the name Sinai Province)
Leadership
Osama bin Laden (1988 – May 2011)
Osama bin Laden served as the emir of al-Qaeda from the organization's founding in 1988 until his assassination by US forces on May 1, 2011. Atiyah Abd al-Rahman was alleged to be second in command prior to his death on August 22, 2011.
Bin Laden was advised by a Shura Council, which consists of senior al-Qaeda members. The group was estimated to consist of 20–30 people.
After May 2011
Ayman al-Zawahiri had been al-Qaeda's deputy emir and assumed the role of emir following bin Laden's death. Al-Zawahiri replaced Saif al-Adel, who had served as interim commander.
On June 5, 2012, Pakistani intelligence officials announced that al-Rahman's alleged successor as second in command, Abu Yahya al-Libi, had been killed in Pakistan.
Nasir al-Wuhayshi was alleged to have become al-Qaeda's overall second in command and general manager in 2013. He was concurrently the leader of al-Qaeda in the Arabian Peninsula (AQAP) until he was killed by a US airstrike in Yemen in June 2015. Abu Khayr al-Masri, Wuhayshi's alleged successor as the deputy to Ayman al-Zawahiri, was killed by a US airstrike in Syria in February 2017.
Al-Qaeda's network was built from scratch as a conspiratorial network which drew upon the leadership of a number of regional nodes. The organization divided itself into several committees, which include:
The Military Committee, which is responsible for training operatives, acquiring weapons, and planning attacks.
The Money/Business Committee, which funds the recruitment and training of operatives through the hawala banking system. US-led efforts to eradicate the sources of "terrorist financing" were most successful in the year immediately following the September 11 attacks. Al-Qaeda continues to operate through unregulated banks, such as the 1,000 or so hawaladars in Pakistan, some of which can handle multimillion-dollar deals. The committee also procures false passports, pays al-Qaeda members, and oversees profit-driven businesses. In the 9/11 Commission Report, it was estimated that al-Qaeda required $30 million per year to conduct its operations.
The Law Committee reviews Sharia law and decides whether particular courses of action conform to it.
The Islamic Study/Fatwah Committee issues religious edicts, such as an edict in 1998 telling Muslims to kill Americans.
The Media Committee ran the now-defunct newspaper Nashrat al Akhbar and handled public relations.
In 2005, al-Qaeda formed As-Sahab, a media production house, to supply its video and audio materials.
Command structure
Most of Al Qaeda's top leaders and operational directors were veterans who fought against the Soviet invasion of Afghanistan in the 1980s. Osama bin Laden and his deputy, Ayman al-Zawahiri, were the leaders who were considered the operational commanders of the organization. Nevertheless, Al-Qaeda is not operationally managed by Ayman al-Zawahiri. Several operational groups exist, which consult with the leadership in situations where attacks are in preparation.
When asked in 2005 about the possibility of al-Qaeda's connection to the July 7, 2005 London bombings, Metropolitan Police Commissioner Sir Ian Blair said: "Al-Qaeda is not an organization. Al-Qaeda is a way of working... but this has the hallmark of that approach... al-Qaeda clearly has the ability to provide training... to provide expertise... and I think that is what has occurred here." On August 13, 2005, The Independent newspaper reported that the July 7 bombers had acted independently of an al-Qaeda mastermind.
Nasser al-Bahri, who was Osama bin Laden's bodyguard for four years in the run-up to 9/11 wrote in his memoir a highly detailed description of how the group functioned at that time. Al-Bahri described al-Qaeda's formal administrative structure and vast arsenal. However, the author Adam Curtis argued that the idea of al-Qaeda as a formal organization is primarily an American invention. Curtis contended the name "al-Qaeda" was first brought to the attention of the public in the 2001 trial of bin Laden and the four men accused of the 1998 US embassy bombings in East Africa. Curtis wrote:
During the 2001 trial, the US Department of Justice needed to show that bin Laden was the leader of a criminal organization in order to charge him in absentia under the Racketeer Influenced and Corrupt Organizations Act. The name of the organization and details of its structure were provided in the testimony of Jamal al-Fadl, who said he was a founding member of the group and a former employee of bin Laden. Questions about the reliability of al-Fadl's testimony have been raised by a number of sources because of his history of dishonesty, and because he was delivering it as part of a plea bargain agreement after being convicted of conspiring to attack US military establishments. Sam Schmidt, a defense attorney who defended al-Fadl, said:
Field operatives
The number of individuals in the group who have undergone proper military training, and are capable of commanding insurgent forces, is largely unknown. Documents captured in the raid on bin Laden's compound in 2011 show that the core al-Qaeda membership in 2002 was 170. In 2006, it was estimated that al-Qaeda had several thousand commanders embedded in 40 different countries. More recently, it was believed that no more than 200–300 members were still active commanders.
According to the 2004 BBC documentary The Power of Nightmares, al-Qaeda was so weakly linked together that it was hard to say it existed apart from bin Laden and a small clique of close associates. The lack of any significant numbers of convicted al-Qaeda members, despite a large number of arrests on terrorism charges, was cited by the documentary as a reason to doubt whether a widespread entity that met the description of al-Qaeda existed. Al-Qaeda's commanders, as well as its sleeping agents, are hiding in different parts of the world to this day. They are mainly hunted by the American and Israeli secret services. Al-Qaeda's number two leader, Abdullah Ahmed Abdullah, known by the pseudonym Abu Muhammad al-Masri, was killed by Israeli agents in Iran in November 2020. He had been involved in the 1998 bombings of the US embassies in Kenya and Tanzania.
Insurgent forces
According to author Robert Cassidy, al-Qaeda maintains two separate forces which are deployed alongside insurgents in Iraq and Pakistan. The first, numbering in the tens of thousands, was "organized, trained, and equipped as insurgent combat forces" in the Soviet–Afghan war. The force was composed primarily of foreign mujahideen from Saudi Arabia and Yemen. Many of these fighters went on to fight in Bosnia and Somalia for global jihad. Another group, which numbered 10,000 in 2006, lives in the West and has received rudimentary combat training.
Other analysts have described al-Qaeda's rank and file as being "predominantly Arab" in its first years of operation, but noted that the organization also includes "other peoples". It has been estimated that 62 percent of al-Qaeda members have a university education. In 2011 and the following year, US operations killed Osama bin Laden, Anwar al-Awlaki, the organization's chief propagandist, and Abu Yahya al-Libi, its deputy commander. Optimistic voices were already saying it was over for al-Qaeda. Nevertheless, it was around this time that the Arab Spring reached the region, and its turmoil proved a boon to al-Qaeda's regional forces. Seven years later, Ayman al-Zawahiri was arguably the number one leader in the organization, implementing his strategy with systematic consistency. Tens of thousands loyal to al-Qaeda and related organizations were able to challenge local and regional stability and ruthlessly attack their enemies in the Middle East, Africa, South Asia, Southeast Asia, Europe and Russia alike. From Northwest Africa to South Asia, al-Qaeda had more than two dozen "franchise-based" allies. The number of al-Qaeda militants was estimated at 20,000 in Syria alone, with 4,000 members in Yemen and about 7,000 in Somalia. The war was not over.
Financing
Al-Qaeda usually does not disburse funds for attacks, and very rarely makes wire transfers. In the 1990s, financing came partly from the personal wealth of Osama bin Laden. Other sources of income included the heroin trade and donations from supporters in Kuwait, Saudi Arabia and other Islamic Gulf states. A WikiLeaks-released 2009 internal US government cable stated that "terrorist funding emanating from Saudi Arabia remains a serious concern."
Among the first pieces of evidence regarding Saudi Arabia's support for al-Qaeda was the so-called "Golden Chain", a list of early al-Qaeda funders seized during a 2002 raid in Sarajevo by Bosnian police. The hand-written list was validated by al-Qaeda defector Jamal al-Fadl, and included the names of both donors and beneficiaries. Osama bin Laden's name appeared seven times among the beneficiaries, while 20 Saudi and Gulf-based businessmen and politicians were listed among the donors. Notable donors included Adel Batterjee and Wael Hamza Julaidan. Batterjee was designated as a terror financier by the US Department of the Treasury in 2004, and Julaidan is recognized as one of al-Qaeda's founders.
Documents seized during the 2002 Bosnia raid showed that al-Qaeda widely exploited charities to channel financial and material support to its operatives across the globe. Notably, this activity exploited the International Islamic Relief Organization (IIRO) and the Muslim World League (MWL). The IIRO had ties with al-Qaeda associates worldwide, including al-Qaeda's deputy Ayman al Zawahiri. Zawahiri's brother worked for the IIRO in Albania and had actively recruited on behalf of al-Qaeda. The MWL was openly identified by al-Qaeda's leader as one of the three charities al-Qaeda primarily relied upon for funding sources.
Allegations of Qatari support
Several Qatari citizens have been accused of funding al-Qaeda. This includes Abd Al-Rahman al-Nuaimi, a Qatari citizen and a human-rights activist who founded the Swiss-based non-governmental organization (NGO) Alkarama. On December 18, 2013, the US Treasury designated Nuaimi as a terrorist for his activities supporting al-Qaeda. The US Treasury has said Nuaimi "has facilitated significant financial support to al-Qaeda in Iraq, and served as an interlocutor between al-Qaeda in Iraq and Qatar-based donors".
Nuaimi was accused of overseeing a $2 million monthly transfer to al-Qaeda in Iraq as part of his role as mediator between Iraq-based al-Qaeda senior officers and Qatari citizens. Nuaimi allegedly entertained relationships with Abu-Khalid al-Suri, al-Qaeda's top envoy in Syria, who processed a $600,000 transfer to al-Qaeda in 2013. Nuaimi is also known to be associated with Abd al-Wahhab Muhammad 'Abd al-Rahman al-Humayqani, a Yemeni politician and founding member of Alkarama, who was listed as a Specially Designated Global Terrorist (SDGT) by the US Treasury in 2013. The US authorities claimed that Humayqani exploited his role in Alkarama to fundraise on behalf of al-Qaeda in the Arabian Peninsula (AQAP). A prominent figure in AQAP, Nuaimi was also reported to have facilitated the flow of funding to AQAP affiliates based in Yemen. Nuaimi was also accused of investing funds in the charity directed by Humayqani to ultimately fund AQAP. About ten months after being sanctioned by the US Treasury, Nuaimi was also restrained from doing business in the UK.
Another Qatari citizen, Kalifa Mohammed Turki Subayi, was sanctioned by the US Treasury on June 5, 2008, for his activities as a "Gulf-based al-Qaeda financier". Subayi's name was added to the UN Security Council's Sanctions List in 2008 on charges of providing financial and material support to al-Qaeda senior leadership. Subayi allegedly moved al-Qaeda recruits to South Asia-based training camps. He also financially supported Khalid Sheikh Mohammed, a Pakistani national and senior al-Qaeda officer who is believed to be the mastermind behind the September 11 attack according to the September 11 Commission report.
Qataris provided support to al-Qaeda through the country's largest NGO, the Qatar Charity. Al-Qaeda defector al-Fadl, who was a former member of Qatar Charity, testified in court that Abdullah Mohammed Yusef, who served as Qatar Charity's director, was affiliated to al-Qaeda and simultaneously to the National Islamic Front, a political group that gave al-Qaeda leader Osama Bin Laden harbor in Sudan in the early 1990s.
It was alleged that in 1993 Bin Laden was using Middle East based Sunni charities to channel financial support to al-Qaeda operatives overseas. The same documents also report Bin Laden's complaint that the failed assassination attempt of Egyptian President Hosni Mubarak had compromised the ability of al-Qaeda to exploit charities to support its operatives to the extent it was capable of before 1995.
Qatar financed al-Qaeda's enterprises through al-Qaeda's former affiliate in Syria, Jabhat al-Nusra. The funding was primarily channeled through kidnapping for ransom. The Consortium Against Terrorist Finance (CATF) reported that the Gulf country has funded al-Nusra since 2013. In 2017, Asharq Al-Awsat estimated that Qatar had disbursed $25 million in support of al-Nusra through kidnapping for ransom. In addition, Qatar has launched fundraising campaigns on behalf of al-Nusra. Al-Nusra acknowledged a Qatar-sponsored campaign "as one of the preferred conduits for donations intended for the group".
Strategy
In the disagreement over whether Al-Qaeda's objectives are religious or political, Mark Sedgwick describes Al-Qaeda's strategy as political in the immediate term but with ultimate aims that are religious.
On March 11, 2005, Al-Quds Al-Arabi published extracts from Saif al-Adel's document "Al Qaeda's Strategy to the Year 2020". Abdel Bari Atwan summarizes this strategy as comprising five stages to rid the Ummah of all forms of oppression:
Provoke the United States and the West into invading a Muslim country by staging a massive attack or string of attacks on US soil that results in massive civilian casualties.
Incite local resistance to occupying forces.
Expand the conflict to neighboring countries and engage the US and its allies in a long war of attrition.
Convert al-Qaeda into an ideology and set of operating principles that can be loosely franchised in other countries without requiring direct command and control, and via these franchises incite attacks against the US and countries allied with the US until they withdraw from the conflict, as happened with the 2004 Madrid train bombings, but which did not have the same effect with the July 7, 2005 London bombings.
The US economy will finally collapse by the year 2020, under the strain of multiple engagements in numerous places. This will lead to a collapse in the worldwide economic system, and lead to global political instability. This will lead to a global jihad led by al-Qaeda, and a Wahhabi Caliphate will then be installed across the world.
Atwan noted that, while the plan is unrealistic, "it is sobering to consider that this virtually describes the downfall of the Soviet Union."
According to Fouad Hussein, a Jordanian journalist and author who has spent time in prison with Al-Zarqawi, Al Qaeda's strategy consists of seven phases and is similar to the plan described in Al Qaeda's Strategy to the year 2020. These phases include:
"The Awakening." This phase was supposed to last from 2001 to 2003. The goal of the phase is to provoke the United States to attack a Muslim country by executing an attack that kills many civilians on US soil.
"Opening Eyes." This phase was supposed to last from 2003 to 2006. The goal of this phase was to recruit young men to the cause and to transform the al-Qaeda group into a movement. Iraq was supposed to become the center of all operations with financial and military support for bases in other states.
"Arising and Standing up", was supposed to last from 2007 to 2010. In this phase, al-Qaeda wanted to execute additional attacks and focus their attention on Syria. Hussein believed other countries in the Arabian Peninsula were also in danger.
Al-Qaeda expected a steady growth among their ranks and territories due to the declining power of the regimes in the Arabian Peninsula. The main focus of attack in this phase was supposed to be on oil suppliers and cyberterrorism, targeting the US economy and military infrastructure.
The declaration of an Islamic Caliphate, which was projected between 2013 and 2016. In this phase, al-Qaeda expected the resistance from Israel to be heavily reduced.
The declaration of an "Islamic Army" and a "fight between believers and non-believers", also called "total confrontation".
"Definitive Victory", projected to be completed by 2020.
According to the seven-phase strategy, the war is projected to last less than two years.
According to Charles Lister of the Middle East Institute and Katherine Zimmerman of the American Enterprise Institute, the new model of al-Qaeda is to "socialize communities" and build a broad territorial base of operations with the support of local communities, also gaining income independent of the funding of sheiks.
Name
The English name of the organization is a simplified transliteration of the Arabic noun, which means "the foundation" or "the base". The initial al- is the Arabic definite article "the", hence "the base".
In Arabic, al-Qaeda has four syllables. However, since two of the Arabic consonants in the name are not phones found in the English language, several naturalized English pronunciations are in common use. Al-Qaeda's name can also be transliterated as al-Qaida, al-Qa'ida, or el-Qaida.
Bin Laden explained the origin of the term in a videotaped interview with Al Jazeera journalist Tayseer Alouni in October 2001:
It has been argued that two documents seized from the Sarajevo office of the Benevolence International Foundation prove the name was not simply adopted by the mujahideen movement and that a group called al-Qaeda was established in August 1988. Both of these documents contain minutes of meetings held to establish a new military group, and contain the term "al-Qaeda".
Former British Foreign Secretary Robin Cook wrote that the word al-Qaeda should be translated as "the database", because it originally referred to the computer file of the thousands of mujahideen militants who were recruited and trained with CIA help to defeat the Russians. In April 2002, the group assumed the name Qa'idat al-Jihad, which means "the base of Jihad". According to Diaa Rashwan, this was "apparently as a result of the merger of the overseas branch of Egypt's al-Jihad, which was led by Ayman al-Zawahiri, with the groups Bin Laden brought under his control after his return to Afghanistan in the mid-1990s."
Ideology
The radical Islamist movement developed during the Islamic revival and the rise of Islamism that followed the Iranian Revolution (1978–1979).
Some have argued that the writings of Islamic author and thinker Sayyid Qutb inspired the al-Qaeda organization. In the 1950s and 1960s, Qutb preached that because of the lack of sharia law, the Muslim world was no longer Muslim, and had reverted to the pre-Islamic ignorance known as jahiliyyah. To restore Islam, Qutb argued that a vanguard of righteous Muslims was needed in order to establish "true Islamic states", implement sharia, and rid the Muslim world of any non-Muslim influences. In Qutb's view, the enemies of Islam included "world Jewry", which "plotted conspiracies" and opposed Islam.
In the words of Mohammed Jamal Khalifa, a close college friend of bin Laden:
Qutb also influenced Ayman al-Zawahiri. Zawahiri's uncle and maternal family patriarch, Mafouz Azzam, was Qutb's student, protégé, personal lawyer, and an executor of his estate. Azzam was one of the last people to see Qutb alive before his execution. Zawahiri paid homage to Qutb in his work Knights under the Prophet's Banner.
Qutb argued that many Muslims were not true Muslims. Some Muslims, Qutb argued, were apostates. These alleged apostates included leaders of Muslim countries, since they failed to enforce sharia law. The Afghan jihad against the pro-Soviet government further developed the Salafist Jihadist movement which inspired Al-Qaeda.
Theory of Islamic State
Al Qaeda aims to establish an Islamic State in the Arab World, modelled after the Rashidun Caliphate, by initiating a global Jihad against the "International Jewish-Crusader Alliance" led by the United States, which it sees as the "external enemy", and against the secular governments in Muslim countries, which are described as "the apostate domestic enemy". Once foreign influences and the secular ruling authorities are removed from Muslim countries through Jihad, Al Qaeda supports elections to choose the rulers of its proposed Islamic states. This is to be done through representatives of leadership councils (Shura) that would ensure the implementation of Shari'a (Islamic law). However, it opposes elections that institute parliaments which empower Muslim and non-Muslim legislators to collaborate in making laws of their own choosing. In the second edition of his book Knights Under the Banner of the Prophet, Ayman Al Zawahiri writes: "We demand... the government of the rightly guiding caliphate, which is established on the basis of the sovereignty of sharia and not on the whims of the majority. Its ummah chooses its rulers....If they deviate, the ummah brings them to account and removes them. The ummah participates in producing that government's decisions and determining its direction. ... [The caliphal state] commands the right and forbids the wrong and engages in jihad to liberate Muslim lands and to free all humanity from all oppression and ignorance."
Religious compatibility
Abdel Bari Atwan wrote that:
Attacks on civilians
Following its 9/11 attack and in response to its condemnation by Islamic scholars, Al-Qaeda provided a justification for the killing of non-combatants/civilians, entitled "A Statement from Qaidat al-Jihad Regarding the Mandates of the Heroes and the Legality of the Operations in New York and Washington". According to two critics, Quintan Wiktorowicz and John Kaltner, it provides "ample theological justification for killing civilians in almost any imaginable situation."
Among these justifications are that America is leading the West in waging a War on Islam, so that attacks on America are a defense of Islam, and that any treaties and agreements between Muslim-majority states and Western countries which would be violated by attacks are null and void. According to the tract, several conditions allow for the killing of civilians, including:
retaliation for the American war on Islam which al-Qaeda alleges has targeted "Muslim women, children and elderly";
when it is too difficult to distinguish between non-combatants and combatants when attacking an enemy "stronghold" (hist) and/or non-combatants remain in enemy territory, killing them is allowed;
those who assist the enemy "in deed, word, mind" are eligible for killing, and this includes the general population in democratic countries because civilians can vote in elections that bring enemies of Islam to power;
the necessity of killing in the war to protect Islam and Muslims;
the prophet Muhammad, when asked whether the Muslim fighters could use the catapult against the village of Taif, replied affirmatively, even though the enemy fighters were mixed with a civilian population;
if the women, children and other protected groups serve as human shields for the enemy;
if the enemy has broken a treaty, killing of civilians is permitted.
History
The Guardian in 2009 described five distinct phases in the development of al-Qaeda: its beginnings in the late 1980s, a "wilderness" period in 1990–1996, its "heyday" in 1996–2001, a network period from 2001 to 2005, and a period of fragmentation from 2005 to 2009.
Jihad in Afghanistan
The origins of al-Qaeda can be traced to the Soviet War in Afghanistan (December 1979 – February 1989). The United States viewed the conflict in Afghanistan in terms of the Cold War, with Marxists on one side and the native Afghan mujahideen on the other. This view led to a CIA program called Operation Cyclone, which channeled funds through Pakistan's Inter-Services Intelligence agency to the Afghan Mujahideen. The US government provided substantial financial support to the Afghan Islamic militants. Aid to Gulbuddin Hekmatyar, an Afghan mujahideen leader and founder of the Hezb-e Islami, amounted to more than $600 million. In addition to American aid, Hekmatyar was the recipient of Saudi aid. In the early 1990s, after the US had withdrawn support, Hekmatyar "worked closely" with bin Laden.
At the same time, a growing number of Arab mujahideen joined the jihad against the Afghan Marxist regime, which was facilitated by international Muslim organizations, particularly the Maktab al-Khidamat (MAK). In 1984, MAK was established in Peshawar, Pakistan, by bin Laden and Abdullah Yusuf Azzam, a Palestinian Islamic scholar and member of the Muslim Brotherhood. MAK organized guest houses in Peshawar, near the Afghan border, and gathered supplies for the construction of paramilitary training camps to prepare foreign recruits for the Afghan war front. MAK was funded by the Saudi government as well as by individual Muslims including Saudi businessmen. Bin Laden also became a major financier of the mujahideen, spending his own money and using his connections to influence public opinion about the war.
From 1986, MAK began to set up a network of recruiting offices in the US, the hub of which was the Al Kifah Refugee Center at the Farouq Mosque on Brooklyn's Atlantic Avenue. Among notable figures at the Brooklyn center were "double agent" Ali Mohamed, whom FBI special agent Jack Cloonan called "bin Laden's first trainer", and "Blind Sheikh" Omar Abdel-Rahman, a leading recruiter of mujahideen for Afghanistan. Azzam and bin Laden began to establish camps in Afghanistan in 1987.
MAK and foreign mujahideen volunteers, or "Afghan Arabs", did not play a major role in the war. While over 250,000 Afghan mujahideen fought the Soviets and the communist Afghan government, it is estimated that there were never more than two thousand foreign mujahideen on the field at any one time. Nonetheless, foreign mujahideen volunteers came from 43 countries, and the total number who participated in the Afghan movement between 1982 and 1992 is reported to have been 35,000. Bin Laden played a central role in organizing training camps for the foreign Muslim volunteers.
The Soviet Union withdrew from Afghanistan in 1989. Mohammad Najibullah's Communist Afghan government lasted for three more years, before it was overrun by elements of the mujahideen.
Expanding operations
Toward the end of the Soviet military mission in Afghanistan, some foreign mujahideen wanted to expand their operations to include Islamist struggles in other parts of the world, such as Palestine and Kashmir. A number of overlapping and interrelated organizations were formed, to further those aspirations. One of these was the organization that would eventually be called al-Qaeda.
Research suggests that al-Qaeda was formed on August 11, 1988, when a meeting in Afghanistan between leaders of Egyptian Islamic Jihad, Abdullah Azzam, and bin Laden took place. An agreement was reached to link bin Laden's money with the expertise of the Islamic Jihad organization and take up the jihadist cause elsewhere after the Soviets withdrew from Afghanistan.
Notes indicate al-Qaeda was a formal group by August 20, 1988. A list of requirements for membership itemized the following: listening ability, good manners, obedience, and making a pledge (bayat) to follow one's superiors. In his memoir, bin Laden's former bodyguard, Nasser al-Bahri, gives the only publicly available description of the ritual of giving bayat when he swore his allegiance to the al-Qaeda chief. According to Wright, the group's real name was not used in public pronouncements because "its existence was still a closely held secret."
After Azzam was assassinated in 1989 and MAK broke up, significant numbers of MAK followers joined bin Laden's new organization.
In November 1989, Ali Mohamed, a former special forces sergeant stationed at Fort Bragg, North Carolina, left military service and moved to California. He traveled to Afghanistan and Pakistan and became "deeply involved with bin Laden's plans." In 1991, Ali Mohammed is said to have helped orchestrate bin Laden's relocation to Sudan.
Gulf War and the start of US enmity
Following the Soviet Union's withdrawal from Afghanistan in February 1989, bin Laden returned to Saudi Arabia. The Iraqi invasion of Kuwait in August 1990 had put the Kingdom and its ruling House of Saud at risk. The world's most valuable oil fields were within striking distance of Iraqi forces in Kuwait, and Saddam's call to Pan-Arabism could potentially rally internal dissent.
In the face of a seemingly massive Iraqi military presence, Saudi Arabia's own forces were outnumbered. Bin Laden offered the services of his mujahideen to King Fahd to protect Saudi Arabia from the Iraqi army. The Saudi monarch refused bin Laden's offer, opting instead to allow US and allied forces to deploy troops into Saudi territory.
The deployment angered bin Laden, as he believed the presence of foreign troops in the "land of the two mosques" (Mecca and Medina) profaned sacred soil. After speaking publicly against the Saudi government for harboring American troops, he was banished and forced to live in exile in Sudan.
Sudan
From around 1992 to 1996, al-Qaeda and bin Laden based themselves in Sudan at the invitation of Islamist theoretician Hassan al-Turabi. The move followed an Islamist coup d'état in Sudan, led by Colonel Omar al-Bashir, who professed a commitment to reordering Muslim political values. During this time, bin Laden assisted the Sudanese government, bought or set up various business enterprises, and established training camps.
A key turning point for bin Laden occurred in 1993 when Saudi Arabia gave support for the Oslo Accords, which set a path for peace between Israel and Palestinians. Due to bin Laden's continuous verbal assault on King Fahd of Saudi Arabia, Fahd sent an emissary to Sudan on March 5, 1994, demanding bin Laden's passport. Bin Laden's Saudi citizenship was also revoked. His family was persuaded to cut off his stipend, $7 million a year, and his Saudi assets were frozen. His family publicly disowned him. There is controversy as to what extent bin Laden continued to garner support from members afterwards.
In 1993, a young schoolgirl was killed in an unsuccessful attempt on the life of the Egyptian prime minister, Atef Sedki. Egyptian public opinion turned against Islamist bombings, and the police arrested 280 of al-Jihad's members and executed 6. In June 1995, an attempt to assassinate Egyptian president Mubarak led to the expulsion of Egyptian Islamic Jihad (EIJ), and in May 1996, of bin Laden from Sudan.
According to Pakistani-American businessman Mansoor Ijaz, the Sudanese government offered the Clinton Administration numerous opportunities to arrest bin Laden. Ijaz's claims appeared in numerous op-ed pieces, including one in the Los Angeles Times and one in The Washington Post co-written with former Ambassador to Sudan Timothy M. Carney. Similar allegations have been made by Vanity Fair contributing editor David Rose, and Richard Miniter, author of Losing bin Laden, in a November 2003 interview with World.
Several sources dispute Ijaz's claim, including the 9/11 Commission, which concluded in part:
Refuge in Afghanistan
After the fall of the Afghan communist regime in 1992, Afghanistan was effectively ungoverned for four years and plagued by constant infighting between various mujahideen groups. This situation allowed the Taliban to organize. The Taliban also garnered support from graduates of Islamic schools, which are called madrassa. According to Ahmed Rashid, five leaders of the Taliban were graduates of Darul Uloom Haqqania, a madrassa in the small town of Akora Khattak. The town is situated near Peshawar in Pakistan, but the school is largely attended by Afghan refugees. This institution reflected Salafi beliefs in its teachings, and much of its funding came from private donations from wealthy Arabs. Four of the Taliban's leaders attended a similarly funded and influenced madrassa in Kandahar. Bin Laden's contacts were laundering donations to these schools, and Islamic banks were used to transfer money to an "array" of charities which served as front groups for al-Qaeda.
Many of the mujahideen who later joined the Taliban fought alongside Afghan warlord Mohammad Nabi Mohammadi's Harkat i Inqilabi group at the time of the Russian invasion. This group also enjoyed the loyalty of most Afghan Arab fighters.
The continuing lawlessness enabled the growing and well-disciplined Taliban to expand their control over territory in Afghanistan, and it came to establish an enclave which it called the Islamic Emirate of Afghanistan. In 1994, it captured the regional center of Kandahar, and after making rapid territorial gains thereafter, the Taliban captured the capital city Kabul in September 1996.
In 1996, Taliban-controlled Afghanistan provided a perfect staging ground for al-Qaeda. While not officially working together, Al-Qaeda enjoyed the Taliban's protection and supported the regime in such a strong symbiotic relationship that many Western observers dubbed the Taliban's Islamic Emirate of Afghanistan as, "the world's first terrorist-sponsored state." However, at this time, only Pakistan, Saudi Arabia, and the United Arab Emirates recognized the Taliban as the legitimate government of Afghanistan.
In response to the 1998 United States embassy bombings, an al-Qaeda base in Khost Province was attacked by the United States during Operation Infinite Reach.
While in Afghanistan, the Taliban government tasked al-Qaeda with the training of Brigade 055, an elite element of the Taliban's army. The Brigade mostly consisted of foreign fighters, veterans from the Soviet Invasion, and adherents to the ideology of the mujahideen. In November 2001, as Operation Enduring Freedom had toppled the Taliban government, many Brigade 055 fighters were captured or killed, and those who survived were thought to have escaped into Pakistan along with bin Laden.
By the end of 2008, some sources reported that the Taliban had severed any remaining ties with al-Qaeda, however, there is reason to doubt this. According to senior US military intelligence officials, there were fewer than 100 members of al-Qaeda remaining in Afghanistan in 2009.
The chief of al-Qaeda in the Indian Subcontinent, Asim Omar, was killed in Afghanistan's Musa Qala district after a joint US–Afghanistan commando airstrike on September 23, Afghanistan's National Directorate of Security (NDS) confirmed in October 2019.
In a report released May 27, 2020, the United Nations' Analytical Support and Sanctions Monitoring Team stated that relations between the Taliban and al-Qaeda remain strong; additionally, al-Qaeda itself has admitted that it operates inside Afghanistan.
On July 26, 2020, a United Nations report stated that al-Qaeda is still active in twelve provinces in Afghanistan, that its leader al-Zawahiri is still based in the country, and that the UN Monitoring Team estimated the total number of al-Qaeda fighters in Afghanistan at "between 400 and 600".
Call for global Salafi jihadism
In 1994, the Salafi groups waging jihad in Bosnia entered into decline, and groups such as the Egyptian Islamic Jihad began to drift away from the Salafi cause in Europe. Al-Qaeda stepped in and assumed control of around 80% of non-state armed cells in Bosnia in late 1995. At the same time, al-Qaeda ideologues instructed the network's recruiters to look for jihadi internationalist Muslims who believed that jihad must be fought on a global level. Al-Qaeda also sought to open the "offensive phase" of the global Salafi jihad. Bosnian Islamists in 2006 called for "solidarity with Islamic causes around the world", supporting the insurgents in Kashmir and Iraq as well as the groups fighting for a Palestinian state.
Fatwas
In 1996, al-Qaeda announced its jihad to expel foreign troops and interests from what they considered Islamic lands. Bin Laden issued a fatwa, which amounted to a public declaration of war against the US and its allies, and began to refocus al-Qaeda's resources on large-scale, propagandist strikes.
On February 23, 1998, bin Laden and Ayman al-Zawahiri, a leader of Egyptian Islamic Jihad, along with three other Islamist leaders, co-signed and issued a fatwa calling on Muslims to kill Americans and their allies. Under the banner of the World Islamic Front for Combat Against the Jews and Crusaders, they declared:
Neither bin Laden nor al-Zawahiri possessed the traditional Islamic scholarly qualifications to issue a fatwa. However, they rejected the authority of the contemporary ulema (which they saw as the paid servants of jahiliyya rulers), and took it upon themselves.
Iraq
Al-Qaeda has launched attacks against the Iraqi Shia majority in an attempt to incite sectarian violence. Al-Zarqawi purportedly declared an all-out war on Shiites while claiming responsibility for Shiite mosque bombings. The same month, a statement claiming to be from Al-Qaeda in Iraq was rejected as a "fake". In a December 2007 video, al-Zawahiri defended the Islamic State in Iraq, but distanced himself from the attacks against civilians, which he deemed to be perpetrated by "hypocrites and traitors existing among the ranks".
US and Iraqi officials accused Al-Qaeda in Iraq of trying to push Iraq into a full-scale civil war between Iraq's Shiite population and Sunni Arabs. This was done through an orchestrated campaign of civilian massacres and a number of provocative attacks against high-profile religious targets. With attacks including the 2003 Imam Ali Mosque bombing, the 2004 Day of Ashura and Karbala and Najaf bombings, the 2006 first al-Askari Mosque bombing in Samarra, the deadly single-day series of bombings in which at least 215 people were killed in Baghdad's Shiite district of Sadr City, and the second al-Askari bombing in 2007, Al-Qaeda in Iraq provoked Shiite militias to unleash a wave of retaliatory attacks, resulting in death squad-style killings and further sectarian violence which escalated in 2006. In 2008, sectarian bombings blamed on al-Qaeda in Iraq killed at least 42 people at the Imam Husayn Shrine in Karbala in March, and at least 51 people at a bus stop in Baghdad in June.
In February 2014, after a prolonged dispute with al-Qaeda in Iraq's successor organisation, the Islamic State of Iraq and the Levant (ISIS), al-Qaeda publicly announced it was cutting all ties with the group, reportedly for its brutality and "notorious intractability".
Somalia and Yemen
In Somalia, al-Qaeda agents had been collaborating closely with its Somali wing, which was created from the al-Shabaab group. In February 2012, al-Shabaab officially joined al-Qaeda, declaring loyalty in a video. Somalian al-Qaeda recruited children for suicide-bomber training and recruited young people to participate in militant actions against Americans.
The percentage of attacks in the First World originating from the Afghanistan–Pakistan (AfPak) border declined starting in 2007, as al-Qaeda shifted to Somalia and Yemen. While al-Qaeda leaders were hiding in the tribal areas along the AfPak border, middle-tier leaders heightened activity in Somalia and Yemen.
In January 2009, al-Qaeda's division in Saudi Arabia merged with its Yemeni wing to form al-Qaeda in the Arabian Peninsula (AQAP). Centered in Yemen, the group takes advantage of the country's poor economy, demography and domestic security. In August 2009, the group made an assassination attempt against a member of the Saudi royal family. President Obama asked Ali Abdullah Saleh to ensure closer cooperation with the US in the struggle against the growing activity of al-Qaeda in Yemen, and promised to send additional aid. The wars in Iraq and Afghanistan drew US attention from Somalia and Yemen. In December 2011, US Secretary of Defense Leon Panetta said the US operations against al-Qaeda "are now concentrating on key groups in Yemen, Somalia and North Africa." Al-Qaeda in the Arabian Peninsula claimed responsibility for the 2009 bombing attack on Northwest Airlines Flight 253 by Umar Farouk Abdulmutallab. The AQAP declared the Al-Qaeda Emirate in Yemen on March 31, 2011, after capturing most of the Abyan Governorate.
As the Saudi-led military intervention in Yemen escalated in July 2015, fifty civilians were killed and twenty million people were in need of aid. In February 2016, al-Qaeda forces and Saudi Arabian-led coalition forces were both seen fighting Houthi rebels in the same battle. In August 2018, Al Jazeera reported that "A military coalition battling Houthi rebels secured secret deals with al-Qaeda in Yemen and recruited hundreds of the group's fighters.... Key figures in the deal-making said the United States was aware of the arrangements and held off on drone attacks against the armed group, which was created by Osama bin Laden in 1988."
United States operations
In December 1998, the Director of the CIA Counterterrorism Center reported to President Bill Clinton that al-Qaeda was preparing to launch attacks in the United States, and the group was training personnel to hijack aircraft. On September 11, 2001, al-Qaeda attacked the United States, hijacking four airliners within the country and deliberately crashing two into the twin towers of the World Trade Center in New York City. The third plane crashed into the western side of the Pentagon in Arlington County, Virginia. The fourth plane was crashed into a field in Shanksville, Pennsylvania. In total, the attackers killed 2,977 victims and injured more than 6,000 others.
US officials noted that Anwar al-Awlaki had considerable reach within the US. A former FBI agent identified Awlaki as a known "senior recruiter for al-Qaeda" and a spiritual motivator. Awlaki's sermons in the US were attended by three of the 9/11 hijackers, as well as by accused Fort Hood shooter Nidal Hasan. US intelligence intercepted emails from Hasan to Awlaki between December 2008 and early 2009. On his website, Awlaki praised Hasan's actions in the Fort Hood shooting.
An unnamed official claimed there was good reason to believe Awlaki "has been involved in very serious terrorist activities since leaving the US [in 2002], including plotting attacks against America and our allies." US President Barack Obama approved the targeted killing of al-Awlaki by April 2010, making al-Awlaki the first US citizen ever placed on the CIA target list. That required the consent of the US National Security Council, and officials argued that the attack was appropriate because the individual posed an imminent danger to national security. In May 2010, Faisal Shahzad, who pleaded guilty to the 2010 Times Square car bombing attempt, told interrogators he was "inspired by" al-Awlaki, and sources said Shahzad had made contact with al-Awlaki over the Internet. Representative Jane Harman called him "terrorist number one", and Investor's Business Daily called him "the world's most dangerous man". In July 2010, the US Treasury Department added him to its list of Specially Designated Global Terrorists, and the UN added him to its list of individuals associated with al-Qaeda. In August 2010, al-Awlaki's father initiated a lawsuit against the US government with the American Civil Liberties Union, challenging its order to kill al-Awlaki. In October 2010, US and UK officials linked al-Awlaki to the 2010 cargo plane bomb plot. In September 2011, al-Awlaki was killed in a targeted killing drone attack in Yemen. On March 16, 2012, it was reported that Osama bin Laden plotted to kill US President Barack Obama.
Killing of Osama bin Laden
On May 1, 2011, US President Barack Obama announced that Osama bin Laden had been killed by "a small team of Americans" acting under direct orders, in a covert operation in Abbottabad, Pakistan. The action took place north of Islamabad. According to US officials, a team of 20–25 US Navy SEALs under the command of the Joint Special Operations Command stormed bin Laden's compound with two helicopters. Bin Laden and those with him were killed during a firefight in which US forces experienced no casualties. According to one US official the attack was carried out without the knowledge or consent of the Pakistani authorities. In Pakistan some people were reported to be shocked at the unauthorized incursion by US armed forces. The site is a few miles from the Pakistan Military Academy in Kakul. In his broadcast announcement President Obama said that US forces "took care to avoid civilian casualties".
Details soon emerged that three men and a woman were killed along with bin Laden, the woman being killed when she was "used as a shield by a male combatant". DNA from bin Laden's body, compared with DNA samples on record from his dead sister, confirmed bin Laden's identity. The body was recovered by the US military and was in its custody until, according to one US official, it was buried at sea in accordance with Islamic traditions. One US official said that "finding a country willing to accept the remains of the world's most wanted terrorist would have been difficult." The US State Department issued a "Worldwide caution" for Americans following bin Laden's death, and US diplomatic facilities everywhere were placed on high alert, a senior US official said. Crowds gathered outside the White House and in New York City's Times Square to celebrate bin Laden's death.
Syria
In 2003, President Bashar al-Assad revealed in an interview with a Kuwaiti newspaper that he doubted al-Qaeda even existed. He was quoted as saying, "Is there really an entity called al-Qaeda? Was it in Afghanistan? Does it exist now?" He went on further to remark about bin Laden, commenting "[he] cannot talk on the phone or use the Internet, but he can direct communications to the four corners of the world? This is illogical."
Following the mass protests that took place in 2011, which demanded the resignation of al-Assad, al-Qaeda-affiliated groups and Sunni sympathizers soon began to constitute an effective fighting force against al-Assad. Before the Syrian Civil War, al-Qaeda's presence in Syria was negligible, but its growth thereafter was rapid. Groups such as the al-Nusra Front and the Islamic State of Iraq and the Levant have recruited many foreign Mujahideen to train and fight in what has gradually become a highly sectarian war. Ideologically, the Syrian Civil War has served the interests of al-Qaeda as it pits a mainly Sunni opposition against a secular government. Al-Qaeda and other fundamentalist Sunni militant groups have invested heavily in the civil conflict, at times actively backing and supporting the mainstream Syrian Opposition.
On February 2, 2014, al-Qaeda distanced itself from ISIS and its actions in Syria; however, during 2014–15, ISIS and the al-Qaeda-linked al-Nusra Front were still able to occasionally cooperate in their fight against the Syrian government. Al-Nusra (backed by Saudi Arabia and Turkey as part of the Army of Conquest during 2015–2017) launched many attacks and bombings, mostly against targets affiliated with or supportive of the Syrian government. From October 2015, Russian air strikes targeted positions held by al-Nusra Front, as well as other Islamist and non-Islamist rebels, while the US also targeted al-Nusra with airstrikes. In early 2016, a leading ISIL ideologue described al-Qaeda as the "Jews of jihad".
India
In September 2014, al-Zawahiri announced al-Qaeda was establishing a front in India to "wage jihad against its enemies, to liberate its land, to restore its sovereignty, and to revive its Caliphate." Al-Zawahiri nominated India as a beachhead for regional jihad taking in neighboring countries such as Myanmar and Bangladesh. The motivation for the video was questioned, as it appeared the militant group was struggling to remain relevant in light of the emerging prominence of ISIS. The new wing was to be known as "Qaedat al-Jihad fi'shibhi al-qarrat al-Hindiya" or al-Qaida in the Indian Subcontinent (AQIS). Leaders of several Indian Muslim organizations rejected al-Zawahiri's pronouncement, saying they could see no good coming from it, and viewed it as a threat to Muslim youth in the country.
In 2014, Zee News reported that Bruce Riedel, a former CIA analyst and National Security Council official for South Asia, had accused Pakistani military intelligence and the Inter-Services Intelligence (ISI) of organising and assisting al-Qaeda's efforts to organise in India, had argued that Pakistan ought to be warned it would be placed on the list of State Sponsors of Terrorism, and had said that "Zawahiri made the tape in his hideout in Pakistan, no doubt, and many Indians suspect the ISI is helping to protect him."
In September 2021, after the success of the 2021 Taliban offensive, al-Qaeda congratulated the Taliban and called for the liberation of Kashmir from the "clutches of the enemies of Islam".
Attacks
Al-Qaeda has carried out a total of six major attacks, four of them in its jihad against America. In each case the leadership planned the attack years in advance, arranging for the shipment of weapons and explosives and using its businesses to provide operatives with safehouses and false identities.
1991
To prevent the former Afghan king Mohammed Zahir Shah from coming back from exile and possibly becoming head of a new government, bin Laden instructed a Portuguese convert to Islam, Paulo Jose de Almeida Santos, to assassinate Zahir Shah. On November 4, 1991, Santos entered the king’s villa in Rome posing as a journalist and tried to stab him with a dagger. A tin of cigarillos in the king’s breast pocket deflected the blade and saved Zahir Shah’s life. Santos was apprehended and jailed for 10 years in Italy.
1992
On December 29, 1992, al-Qaeda launched the 1992 Yemen hotel bombings. Two bombs were detonated in Aden, Yemen. The first target was the Movenpick Hotel and the second was the parking lot of the Goldmohur Hotel.
The bombings were an attempt to eliminate American soldiers on their way to Somalia to take part in the international famine relief effort, Operation Restore Hope. Internally, al-Qaeda considered the bombing a victory that frightened the Americans away, but in the US, the attack was barely noticed. No American soldiers were killed because none were staying in the hotel that was bombed. However, an Australian tourist and a Yemeni hotel worker were killed in the bombing, and seven others, mostly Yemenis, were severely injured. Two fatwas are said to have been issued by al-Qaeda member Mamdouh Mahmud Salim to justify the killings according to Islamic law. Salim referred to a famous fatwa issued by Ibn Taymiyyah, a 13th-century scholar much admired by Wahhabis, which sanctioned resistance by any means during the Mongol invasions.
Late 1990s
In 1996, bin Laden personally engineered a plot to assassinate United States President Bill Clinton while the president was in Manila for the Asia-Pacific Economic Cooperation summit. However, intelligence agents intercepted a message before the motorcade was to leave, and alerted the US Secret Service. Agents later discovered a bomb planted under a bridge.
On August 7, 1998, al-Qaeda bombed the US embassies in East Africa, killing 224 people, including 12 Americans. In retaliation, a barrage of cruise missiles launched by the US military devastated an al-Qaeda base in Khost, Afghanistan, but the network's capacity was unharmed. In late 1999 and 2000, al-Qaeda planned attacks to coincide with the millennium, masterminded by Abu Zubaydah and involving Abu Qatada, which would include the bombing of Christian holy sites in Jordan, the bombing of Los Angeles International Airport by Ahmed Ressam, and the bombing of the USS The Sullivans.
On October 12, 2000, al-Qaeda militants in Yemen bombed the missile destroyer USS Cole in a suicide attack, killing 17 US servicemen and damaging the vessel while it lay offshore. Inspired by the success of such a brazen attack, al-Qaeda's command core began to prepare for an attack on the US itself.
September 11 attacks
The September 11 attacks on America by al-Qaeda killed 2,977 people: 2,507 civilians, 343 firefighters, 72 law enforcement officers, and 55 military personnel. Two commercial airliners were deliberately flown into the twin towers of the World Trade Center, a third into the Pentagon, and a fourth, originally intended to target either the United States Capitol or the White House, crashed in a field in Stonycreek Township near Shanksville, Pennsylvania. It was also the deadliest foreign attack on American soil since the Japanese attack on Pearl Harbor on December 7, 1941.
The attacks were conducted by al-Qaeda, acting in accord with the 1998 fatwa issued against the US and its allies by persons under the command of bin Laden, al-Zawahiri, and others. Evidence points to suicide squads led by al-Qaeda military commander Mohamed Atta as the culprits of the attacks, with bin Laden, Ayman al-Zawahiri, Khalid Sheikh Mohammed, and Hambali as the key planners and part of the political and military command.
Messages issued by bin Laden after September 11, 2001, praised the attacks, and explained their motivation while denying any involvement. Bin Laden legitimized the attacks by identifying grievances felt by both mainstream and Islamist Muslims, such as the general perception that the US was actively oppressing Muslims.
Bin Laden asserted that America was massacring Muslims in "Palestine, Chechnya, Kashmir and Iraq" and that Muslims should retain the "right to attack in reprisal". He also claimed the 9/11 attacks were not targeted at people but at "America's icons of military and economic power", even though he planned the attacks for the morning, when the intended targets would be most heavily occupied, so as to generate the maximum number of human casualties.
Evidence later came to light that the original targets for the attack may have been nuclear power stations on the US East Coast. The targets were later altered by al-Qaeda, as it was feared that such an attack "might get out of hand".
Designation as a terrorist group
Al-Qaeda is designated as a terrorist group by numerous countries and international organizations, including:
United Nations Security Council
War on terror
In the immediate aftermath of the 9/11 attacks, the US government responded, and began to prepare its armed forces to overthrow the Taliban, which it believed was harboring al-Qaeda. The US offered Taliban leader Mullah Omar a chance to surrender bin Laden and his top associates. The first forces to be inserted into Afghanistan were paramilitary officers from the CIA's elite Special Activities Division (SAD).
The Taliban offered to turn over bin Laden to a neutral country for trial if the US would provide evidence of bin Laden's complicity in the attacks. US President George W. Bush responded by saying: "We know he's guilty. Turn him over", and British Prime Minister Tony Blair warned the Taliban regime: "Surrender bin Laden, or surrender power."
Soon thereafter the US and its allies invaded Afghanistan, and together with the Afghan Northern Alliance removed the Taliban government as part of the war in Afghanistan. As a result of the US special forces and air support for the Northern Alliance ground forces, a number of Taliban and al-Qaeda training camps were destroyed, and much of the operating structure of al-Qaeda is believed to have been disrupted. After being driven from their key positions in the Tora Bora area of Afghanistan, many al-Qaeda fighters tried to regroup in the rugged Gardez region of the nation.
By early 2002, al-Qaeda had been dealt a serious blow to its operational capacity, and the Afghan invasion appeared to be a success. Nevertheless, a significant Taliban insurgency remained in Afghanistan.
Debate continued regarding the nature of al-Qaeda's role in the 9/11 attacks. The US State Department released a videotape showing bin Laden speaking with a small group of associates somewhere in Afghanistan shortly before the Taliban was removed from power. Although its authenticity has been questioned by some, the tape implicates bin Laden and al-Qaeda in the September 11 attacks. The tape was aired on many television channels, with an accompanying English translation provided by the US Defense Department.
In September 2004, the 9/11 Commission officially concluded that the attacks were conceived and implemented by al-Qaeda operatives. In October 2004, bin Laden appeared to claim responsibility for the attacks in a videotape released through Al Jazeera, saying he was inspired by Israeli attacks on high-rises in the 1982 invasion of Lebanon: "As I looked at those demolished towers in Lebanon, it entered my mind that we should punish the oppressor in kind and that we should destroy towers in America in order that they taste some of what we tasted and so that they be deterred from killing our women and children."
By the end of 2004, the US government proclaimed that two-thirds of the most senior al-Qaeda figures from 2001 had been captured and interrogated by the CIA: Abu Zubaydah, Ramzi bin al-Shibh and Abd al-Rahim al-Nashiri in 2002; Khalid Sheikh Mohammed in 2003; and Saif al Islam el Masry in 2004. Mohammed Atef and several others were killed. The West was nevertheless criticized for failing to neutralize al-Qaeda despite a decade of war.
Activities
Africa
Al-Qaeda involvement in Africa has included a number of bombing attacks in North Africa, while supporting parties in civil wars in Eritrea and Somalia. From 1991 to 1996, bin Laden and other al-Qaeda leaders were based in Sudan.
Islamist rebels in the Sahara calling themselves al-Qaeda in the Islamic Maghreb have stepped up their violence in recent years. French officials say the rebels have no real links to the al-Qaeda leadership, but this has been disputed. It seems likely that bin Laden approved the group's name in late 2006, and the rebels "took on the al Qaeda franchise label", almost a year before the violence began to escalate.
In Mali, the Ansar Dine faction was also reported as an ally of al-Qaeda in 2013, having aligned itself with AQIM.
In 2011, Al-Qaeda's North African wing condemned Libyan leader Muammar Gaddafi and declared support for the Anti-Gaddafi rebels.
Following the Libyan Civil War, the removal of Gaddafi and the ensuing period of post-civil war violence in Libya, various Islamist militant groups affiliated with al-Qaeda were able to expand their operations in the region. The 2012 Benghazi attack, which resulted in the death of US Ambassador J. Christopher Stevens and three other Americans, is suspected of having been carried out by various Jihadist networks, such as Al-Qaeda in the Islamic Maghreb, Ansar al-Sharia and several other Al-Qaeda affiliated groups. The capture of Nazih Abdul-Hamed al-Ruqai, a senior al-Qaeda operative wanted by the United States for his involvement in the 1998 United States embassy bombings, on October 5, 2013, by US Navy Seals, FBI and CIA agents illustrates the importance the US and other Western allies have placed on North Africa.
Europe
Prior to the September 11 attacks, al-Qaeda was present in Bosnia and Herzegovina, and its members were mostly veterans of the El Mudžahid detachment of the Bosnian Muslim Army of the Republic of Bosnia and Herzegovina. Three al-Qaeda operatives carried out the Mostar car bombing in 1997. The operatives were closely linked to and financed by the Saudi High Commission for Relief of Bosnia and Herzegovina founded by then-prince King Salman of Saudi Arabia.
Before the 9/11 attacks and the US invasion of Afghanistan, westerners who had been recruits at al-Qaeda training camps were sought after by al-Qaeda's military wing. Language skills and knowledge of Western culture were common among recruits from Europe, as was the case with Mohamed Atta, an Egyptian national studying in Germany at the time of his training, and other members of the Hamburg Cell. Osama bin Laden and Mohammed Atef would later designate Atta as the ringleader of the 9/11 hijackers. Following the attacks, Western intelligence agencies determined that al-Qaeda cells operating in Europe had aided the hijackers with financing and communications with the central leadership based in Afghanistan.
In 2003, Islamists carried out a series of bombings in Istanbul killing fifty-seven people and injuring seven hundred. Seventy-four people were charged by the Turkish authorities. Some had previously met bin Laden, and though they specifically declined to pledge allegiance to al-Qaeda they asked for its blessing and help.
In 2009, three Londoners, Tanvir Hussain, Assad Sarwar and Ahmed Abdullah Ali, were convicted of conspiring to detonate bombs disguised as soft drinks on seven airplanes bound for Canada and the US. The MI5 investigation regarding the plot involved more than a year of surveillance work conducted by over two hundred officers. British and US officials said the plot, unlike many similar homegrown European Islamic militant plots, was directly linked to al-Qaeda and guided by senior al-Qaeda members in Pakistan.
In 2012, Russian intelligence indicated that al-Qaeda had called for "forest jihad" and had been starting massive forest fires as part of a "thousand cuts" strategy.
Arab world
Following Yemeni unification in 1990, Wahhabi networks began moving missionaries into the country. Although it is unlikely that bin Laden or Saudi al-Qaeda were directly involved, the personal connections made there were built up over the next decade and used in the USS Cole bombing. Concern over al-Qaeda's presence in Yemen grew.
In Iraq, al-Qaeda forces loosely associated with the leadership were embedded in the Jama'at al-Tawhid wal-Jihad group commanded by Abu Musab al-Zarqawi. Specializing in suicide operations, they have been a "key driver" of the Sunni insurgency. Although they played a small part in the overall insurgency, between 30% and 42% of all suicide bombings which took place in the early years were claimed by Zarqawi's group. Reports have indicated that oversights such as the failure to control access to the Qa'qaa munitions factory in Yusufiyah have allowed large quantities of munitions to fall into the hands of al-Qaida. In November 2010, the militant group Islamic State of Iraq, which is linked to al-Qaeda in Iraq, threatened to "exterminate all Iraqi Christians".
Al-Qaeda did not begin training Palestinians until the late 1990s. Large groups such as Hamas and Palestinian Islamic Jihad have rejected an alliance with al-Qaeda, fearing that al-Qaeda will co-opt their cells. This may have changed recently. The Israeli security and intelligence services believe al-Qaeda has managed to infiltrate operatives from the Occupied Territories into Israel, and is waiting for an opportunity to attack.
Saudi Arabia, Qatar and Turkey have openly supported the Army of Conquest, an umbrella rebel group fighting in the Syrian Civil War against the Syrian government that reportedly includes the al-Qaeda-linked al-Nusra Front and another Salafi coalition known as Ahrar al-Sham.
Kashmir
Bin Laden and Ayman al-Zawahiri consider India to be a part of an alleged Crusader-Zionist-Hindu conspiracy against the Islamic world. According to a 2005 report by the Congressional Research Service, bin Laden was involved in training militants for Jihad in Kashmir while living in Sudan in the early 1990s. By 2001, Kashmiri militant group Harkat-ul-Mujahideen had become a part of the al-Qaeda coalition. According to the United Nations High Commissioner for Refugees (UNHCR), al-Qaeda was thought to have established bases in Pakistan administered Kashmir (in Azad Kashmir, and to some extent in Gilgit–Baltistan) during the 1999 Kargil War and continued to operate there with tacit approval of Pakistan's Intelligence services.
Many of the militants active in Kashmir were trained in the same madrasahs as Taliban and al-Qaeda. Fazlur Rehman Khalil of Kashmiri militant group Harkat-ul-Mujahideen was a signatory of al-Qaeda's 1998 declaration of Jihad against America and its allies. In a 'Letter to American People' (2002), bin Laden wrote that one of the reasons he was fighting America was because of its support to India on the Kashmir issue. In November 2001, Kathmandu airport went on high alert after threats that bin Laden planned to hijack a plane and crash it into a target in New Delhi. In 2002, US Secretary of Defense Donald Rumsfeld, on a trip to Delhi, suggested that al-Qaeda was active in Kashmir though he did not have any evidence. Rumsfeld proposed hi-tech ground sensors along the Line of Control to prevent militants from infiltrating into Indian-administered Kashmir.
An investigation in 2002 found evidence that al-Qaeda and its affiliates were prospering in Pakistan-administered Kashmir with tacit approval of Pakistan's Inter-Services Intelligence. In 2002, a special team of Special Air Service and Delta Force was sent into Indian-Administered Kashmir to hunt for bin Laden after receiving reports that he was being sheltered by Kashmiri militant group Harkat-ul-Mujahideen, which had been responsible for kidnapping western tourists in Kashmir in 1995. Britain's highest-ranking al-Qaeda operative Rangzieb Ahmed had previously fought in Kashmir with the group Harkat-ul-Mujahideen and spent time in Indian prison after being captured in Kashmir.
US officials believe al-Qaeda was helping organize attacks in Kashmir in order to provoke conflict between India and Pakistan. Their strategy was to force Pakistan to move its troops to the border with India, thereby relieving pressure on al-Qaeda elements hiding in northwestern Pakistan. In 2006 al-Qaeda claimed they had established a wing in Kashmir. However Indian Army General H. S. Panag argued that the army had ruled out the presence of al-Qaeda in Indian-administered Jammu and Kashmir. Panag also said al-Qaeda had strong ties with Kashmiri militant groups Lashkar-e-Taiba and Jaish-e-Mohammed based in Pakistan. It has been noted that Waziristan has become a battlefield for Kashmiri militants fighting NATO in support of al-Qaeda and Taliban. Dhiren Barot, who wrote the Army of Madinah in Kashmir and was an al-Qaeda operative convicted for involvement in the 2004 financial buildings plot, had received training in weapons and explosives at a militant training camp in Kashmir.
Maulana Masood Azhar, the founder of Kashmiri group Jaish-e-Mohammed, is believed to have met bin Laden several times and received funding from him. In 2002, Jaish-e-Mohammed organized the kidnapping and murder of Daniel Pearl in an operation run in conjunction with al-Qaeda and funded by bin Laden. According to American counter-terrorism expert Bruce Riedel, al-Qaeda and Taliban were closely involved in the 1999 hijacking of Indian Airlines Flight 814 to Kandahar which led to the release of Maulana Masood Azhar and Ahmed Omar Saeed Sheikh from an Indian prison. This hijacking, Riedel said, was rightly described by then Indian Foreign Minister Jaswant Singh as a 'dress rehearsal' for September 11 attacks. Bin Laden personally welcomed Azhar and threw a lavish party in his honor after his release. Ahmed Omar Saeed Sheikh, who had been in prison for his role in the 1994 kidnappings of Western tourists in India, went on to murder Daniel Pearl and was sentenced to death in Pakistan. Al-Qaeda operative Rashid Rauf, who was one of the accused in 2006 transatlantic aircraft plot, was related to Maulana Masood Azhar by marriage.
Lashkar-e-Taiba, a Kashmiri militant group which is thought to be behind 2008 Mumbai attacks, is also known to have strong ties to senior al-Qaeda leaders living in Pakistan. In late 2002, top al-Qaeda operative Abu Zubaydah was arrested while being sheltered by Lashkar-e-Taiba in a safe house in Faisalabad. The FBI believes al-Qaeda and Lashkar have been 'intertwined' for a long time while the CIA has said that al-Qaeda funds Lashkar-e-Taiba. Jean-Louis Bruguière told Reuters in 2009 that "Lashkar-e-Taiba is no longer a Pakistani movement with only a Kashmir political or military agenda. Lashkar-e-Taiba is a member of al-Qaeda."
In a video released in 2008, American-born senior al-Qaeda operative Adam Yahiye Gadahn said that "victory in Kashmir has been delayed for years; it is the liberation of the jihad there from this interference which, Allah willing, will be the first step towards victory over the Hindu occupiers of that Islam land."
In September 2009, a US drone strike reportedly killed Ilyas Kashmiri who was the chief of Harkat-ul-Jihad al-Islami, a Kashmiri militant group associated with al-Qaeda. Kashmiri was described by Bruce Riedel as a 'prominent' al-Qaeda member while others have described him as head of military operations for al-Qaeda. Kashmiri was also charged by the US in a plot against Jyllands-Posten, the Danish newspaper which was at the center of Jyllands-Posten Muhammad cartoons controversy. US officials also believe that Kashmiri was involved in the Camp Chapman attack against the CIA. In January 2010, Indian authorities notified Britain of an al-Qaeda plot to hijack an Indian airlines or Air India plane and crash it into a British city. This information was uncovered from interrogation of Amjad Khwaja, an operative of Harkat-ul-Jihad al-Islami, who had been arrested in India.
In January 2010, US Defense secretary Robert Gates, while on a visit to Pakistan, said that al-Qaeda was seeking to destabilize the region and planning to provoke a nuclear war between India and Pakistan.
Internet
Al-Qaeda and its successors have migrated online to escape detection in an atmosphere of increased international vigilance. The group's use of the Internet has grown more sophisticated, with online activities that include financing, recruitment, networking, mobilization, publicity, and information dissemination, gathering and sharing.
Abu Ayyub al-Masri's al-Qaeda movement in Iraq regularly releases short videos glorifying the activity of jihadist suicide bombers. In addition, both before and after the death of Abu Musab al-Zarqawi (the former leader of al-Qaeda in Iraq), the umbrella organization to which al-Qaeda in Iraq belongs, the Mujahideen Shura Council, has a regular presence on the Web.
The range of multimedia content includes guerrilla training clips, stills of victims about to be murdered, testimonials of suicide bombers, and videos that show participation in jihad through stylized portraits of mosques and musical scores. A website associated with al-Qaeda posted a video of captured American entrepreneur Nick Berg being decapitated in Iraq. Other decapitation videos and pictures, including those of Paul Johnson, Kim Sun-il, and Daniel Pearl, were first posted on jihadist websites.
In December 2004 an audio message claiming to be from bin Laden was posted directly to a website, rather than sending a copy to al Jazeera as he had done in the past. Al-Qaeda turned to the Internet for release of its videos in order to be certain they would be available unedited, rather than risk the possibility of al Jazeera editing out anything critical of the Saudi royal family.
Alneda.com and Jehad.net were perhaps the most significant al-Qaeda websites. Alneda was initially taken down by American Jon Messner, but the operators resisted by shifting the site to various servers and strategically shifting content.
The US government charged a British information technology specialist, Babar Ahmad, with terrorist offences related to his operating a network of English-language al-Qaeda websites, such as Azzam.com. He was convicted and sentenced to 12-and-a-half years in prison.
Online communications
In 2007, al-Qaeda released Mujahedeen Secrets, encryption software used for online and cellular communications. A later version, Mujahideen Secrets 2, was released in 2008.
Aviation network
Al-Qaeda is believed to be operating a clandestine aviation network including "several Boeing 727 aircraft", turboprops and executive jets, according to a 2010 Reuters story. Based on a US Department of Homeland Security report, the story said al-Qaeda is possibly using aircraft to transport drugs and weapons from South America to various unstable countries in West Africa. A Boeing 727 can carry up to ten tons of cargo. The drugs eventually are smuggled to Europe for distribution and sale, and the weapons are used in conflicts in Africa and possibly elsewhere. Gunmen with links to al-Qaeda have been increasingly kidnapping Europeans for ransom. The profits from the drug and weapon sales, and kidnappings can, in turn, fund more militant activities.
Involvement in military conflicts
The following is a list of military conflicts in which al-Qaeda and its direct affiliates have taken part.
Alleged CIA involvement
Experts debate the notion that al-Qaeda attacks were an indirect result of the American CIA's Operation Cyclone program to help the Afghan mujahideen. Robin Cook, British Foreign Secretary from 1997 to 2001, has written that al-Qaeda and bin Laden were "a product of a monumental miscalculation by western security agencies", and that "Al-Qaida, literally 'the database', was originally the computer file of the thousands of mujahideen who were recruited and trained with help from the CIA to defeat the Russians."
Munir Akram, Permanent Representative of Pakistan to the United Nations from 2002 to 2008, wrote in a letter published in The New York Times on January 19, 2008:
CNN journalist Peter Bergen, Pakistani ISI Brigadier Mohammad Yousaf, and CIA operatives involved in the Afghan program, such as Vincent Cannistraro, deny that the CIA or other American officials had contact with the foreign mujahideen or bin Laden, or that they armed, trained, coached or indoctrinated them. In his 2004 book Ghost Wars, Steve Coll writes that the CIA had contemplated providing direct support to the foreign mujahideen, but that the idea never moved beyond discussions.
Bergen and others argue that there was no need to recruit foreigners unfamiliar with the local language, customs or lay of the land since there were a quarter of a million local Afghans willing to fight. Bergen further argues that foreign mujahideen had no need for American funds since they received several million dollars per year from internal sources. Lastly, he argues that Americans could not have trained the foreign mujahideen because Pakistani officials would not allow more than a handful of them to operate in Pakistan and none in Afghanistan, and the Afghan Arabs were almost invariably militant Islamists reflexively hostile to Westerners whether or not the Westerners were helping the Muslim Afghans.
According to Bergen, who conducted the first television interview with bin Laden in 1997: the idea that "the CIA funded bin Laden or trained bin Laden... [is] a folk myth. There's no evidence of this... Bin Laden had his own money, he was anti-American and he was operating secretly and independently... The real story here is the CIA didn't really have a clue about who this guy was until 1996 when they set up a unit to really start tracking him."
Jason Burke also wrote:
Broader influence
Anders Behring Breivik, the perpetrator of the 2011 Norway attacks, was inspired by Al-Qaeda, calling it "the most successful revolutionary movement in the world." While admitting different aims, he sought to "create a European version of Al-Qaida."
The appropriate response to offshoots is a subject of debate. A journalist reported in 2012 that a senior US military planner had asked: "Should we resort to drones and Special Operations raids every time some group raises the black banner of al Qaeda? How long can we continue to chase offshoots of offshoots around the world?"
Criticism
Islamic extremism dates back to the early history of Islam with the emergence of the Kharijites in the 7th century CE. From their essentially political position, the Kharijites developed extreme doctrines that set them apart from both mainstream Sunni and Shiʿa Muslims. The original schism between Kharijites, Sunnis, and Shiʿas among Muslims was disputed over the political and religious succession to the guidance of the Muslim community (Ummah) after the death of the Islamic prophet Muhammad. Shiʿas believe Ali ibn Abi Talib is the true successor to Muhammad, while Sunnis consider Abu Bakr to hold that position. The Kharijites broke away from both the Shiʿas and the Sunnis during the First Fitna (the first Islamic Civil War); they were particularly noted for adopting a radical approach to takfīr (excommunication), whereby they declared both Sunni and Shiʿa Muslims to be either infidels (kuffār) or false Muslims (munāfiḳūn), and therefore deemed them worthy of death for their perceived apostasy (ridda).
According to a number of sources, a "wave of revulsion" has been expressed against al-Qaeda and its affiliates by "religious scholars, former fighters and militants" who are alarmed by al-Qaeda's takfir and its killing of Muslims in Muslim countries, especially in Iraq.
Noman Benotman, a former militant member of the Libyan Islamic Fighting Group (LIFG), went public with an open letter of criticism to Ayman al-Zawahiri in November 2007, after persuading the imprisoned senior leaders of his former group to enter into peace negotiations with the Libyan regime. While Ayman al-Zawahiri announced the affiliation of the group with al-Qaeda in November 2007, the Libyan government released 90 members of the group from prison several months after "they were said to have renounced violence."
In 2007, on the anniversary of the September 11 attacks, the Saudi sheikh Salman al-Ouda delivered a personal rebuke to bin Laden. Al-Ouda, a religious scholar and one of the fathers of the Sahwa, the fundamentalist awakening movement that swept through Saudi Arabia in the 1980s, is a widely respected critic of jihadism. Al-Ouda addressed al-Qaeda's leader on television asking him:
According to Pew polls, support for al-Qaeda had dropped in the Muslim world in the years before 2008. Support for suicide bombings in Indonesia, Lebanon, and Bangladesh dropped by half or more over the preceding five years. In Saudi Arabia, only ten percent of respondents had a favorable view of al-Qaeda, according to a December 2007 poll by Terror Free Tomorrow, a Washington-based think tank.
In 2007, the imprisoned Sayyed Imam Al-Sharif, an influential Afghan Arab, "ideological godfather of al-Qaeda", and former supporter of takfir, withdrew his support from al-Qaeda with a book, Wathiqat Tarshid Al-'Aml Al-Jihadi fi Misr w'Al-'Alam.
Although once associated with al-Qaeda, in September 2009 LIFG completed a new "code" for jihad, a 417-page religious document entitled "Corrective Studies". Given its credibility and the fact that several other prominent Jihadists in the Middle East have turned against al-Qaeda, the LIFG's reversal may be an important step toward staunching al-Qaeda's recruitment.
Other criticisms
Bilal Abdul Kareem, an American journalist based in Syria, created a documentary about al-Shabab, al-Qaeda's affiliate in Somalia. The documentary included interviews with former members of the group who stated their reasons for leaving al-Shabab. The members made accusations of segregation, lack of religious awareness, and internal corruption and favoritism. In response, the Global Islamic Media Front condemned Kareem, called him a liar, and denied the accusations from the former fighters.
In mid-2014, after the Islamic State of Iraq and the Levant declared that it had restored the Caliphate, the group's then-spokesman Abu Muhammad al-Adnani released an audio statement claiming that "the legality of all emirates, groups, states, and organizations, becomes null by the expansion of the Caliphate's authority." The speech included a religious refutation of al-Qaeda for being too lenient regarding Shiites and for its refusal to recognize the authority of Abu Bakr al-Baghdadi, with al-Adnani specifically noting: "It is not suitable for a state to give allegiance to an organization." He also recalled a past instance in which Osama bin Laden called on al-Qaeda members and supporters to give allegiance to Abu Omar al-Baghdadi when the group was still operating solely in Iraq as the Islamic State of Iraq, and condemned Ayman al-Zawahiri for not making this same claim for Abu Bakr al-Baghdadi, saying that Zawahiri was encouraging factionalism and division between former allies of ISIL such as the al-Nusra Front.
See also
Al-Qaeda involvement in Asia
Al Qaeda Network Exord
Allegations of support system in Pakistan for Osama bin Laden
Belligerents in the Syrian civil war
Bin Laden Issue Station (former CIA unit for tracking bin Laden)
Steven Emerson
Fatawā of Osama bin Laden
International propagation of Salafism and Wahhabism (by region)
Iran - Alleged Al-Qaeda ties
Islamic Military Counter Terrorism Coalition
Operation Cannonball
Psychological warfare
Religious terrorism
Takfir wal-Hijra
Videos and audio recordings of Osama bin Laden
Violent extremism
Publications
Al Qaeda Handbook
Management of Savagery
References
Sources
Bibliography
Reviews
Government reports
External links
Al-Qaeda in Oxford Islamic Studies Online
Al-Qaeda, Counter Extremism Project profile
17 de-classified documents captured during the Abbottabad raid and released to the Combating Terrorism Center
Media
Peter Taylor. (2007). "War on the West". Age of Terror, No. 4, series 1. BBC.
Investigating Al-Qaeda, BBC News
"Al Qaeda's New Front" from PBS Frontline, January 2005
1988 establishments in Asia
Anti-communism in Afghanistan
Anti-communist organizations
Anti-communist terrorism
Anti-government factions of the Syrian civil war
Antisemitism in Pakistan
Antisemitism in the Arab world
Antisemitism in the Middle East
Anti-Shi'ism
Anti-Zionism in the Arab world
Anti-Zionism in the Middle East
Islam and antisemitism
Islamic fundamentalism in the United States
Islamic fundamentalism
Islamist groups
Islam-related controversies
Jihadist groups in Afghanistan
Jihadist groups in Algeria
Jihadist groups in Bangladesh
Jihadist groups in Egypt
Jihadist groups in India
Jihadist groups in Iraq
Jihadist groups in Pakistan
Jihadist groups in Syria
Jihadist groups
Organisations designated as terrorist by Australia
Organisations designated as terrorist by India
Organisations designated as terrorist by Iran
Organisations designated as terrorist by Japan
Organisations designated as terrorist by Pakistan
Organisations designated as terrorist by the United Kingdom
Organizations designated as terrorist by Bahrain
Organizations designated as terrorist by Canada
Organizations designated as terrorist by China
Organizations designated as terrorist by Israel
Organizations designated as terrorist by Malaysia
Organizations designated as terrorist by Paraguay
Organizations designated as terrorist by Russia
Organizations designated as terrorist by Saudi Arabia
Organizations designated as terrorist by the United Arab Emirates
Organizations designated as terrorist by the United States
Organizations designated as terrorist by Turkey
Organizations designated as terrorist in Asia
Organizations established in 1988
Pan-Islamism
Rebel groups in Afghanistan
Rebel groups in Iraq
Rebel groups in Yemen
Al-Qaeda
Salafi Jihadism |
1955 | https://en.wikipedia.org/wiki/Adobe%20Inc. | Adobe Inc. | Adobe Inc. ( ), originally called Adobe Systems Incorporated, is an American multinational computer software company incorporated in Delaware
and headquartered in San Jose, California. It has historically specialized in software for the creation and publication of a wide range of content, including graphics, photography, illustration, animation, multimedia/video, motion pictures, and print. Its flagship products include Adobe Photoshop image editing software; Adobe Illustrator vector-based illustration software; Adobe Acrobat Reader and the Portable Document Format (PDF); and a host of tools primarily for audio-visual content creation, editing and publishing. Adobe offered a bundled solution of its products named Adobe Creative Suite, which evolved into a subscription software as a service (SaaS) offering named Adobe Creative Cloud. The company also expanded into digital marketing software and in 2021 was considered one of the top global leaders in Customer Experience Management (CXM).
Adobe was founded in December 1982 by John Warnock and Charles Geschke, who established the company after leaving Xerox PARC to develop and sell the PostScript page description language. In 1985, Apple Computer licensed PostScript for use in its LaserWriter printers, which helped spark the desktop publishing revolution. Adobe later developed animation and multimedia through its acquisition of Macromedia, from which it acquired Adobe Flash; video editing and compositing software with Adobe Premiere, later known as Adobe Premiere Pro; low-code web development with Adobe Muse; and a suite of software for digital marketing management.
As of 2021, Adobe has more than 24,000 employees worldwide. It has major development operations in the United States in Newton, New York City, Minneapolis, Lehi, Seattle, Austin and San Francisco, as well as in Noida and Bangalore in India.
History
The company was started in John Warnock's garage. The name of the company, Adobe, comes from Adobe Creek in Los Altos, California, which ran behind Warnock's house. That creek is so named because of the type of clay found there, which alludes to the creative nature of the company's software. Adobe's corporate logo features a stylized "A" and was designed by Marva Warnock, a graphic designer who is John Warnock's wife. In 2020, the company updated its visual identity, including updating its logo to a single color, an all-red logo that is warmer and more contemporary.
Steve Jobs attempted to buy the company for $5 million in 1982, but Warnock and Geschke refused. Their investors urged them to work something out with Jobs, so they agreed to sell him shares worth 19 percent of the company. Jobs paid a five-times multiple of their company's valuation at the time, plus a five-year license fee for PostScript, in advance. The purchase and advance made Adobe the first company in the history of Silicon Valley to become profitable in its first year.
Warnock and Geschke considered various business options including a copy-service business and a turnkey system for office printing. Then they chose to focus on developing specialized printing software and created the Adobe PostScript page description language.
PostScript was the first truly international standard for computer printing, as it included algorithms describing the letter-forms of many languages. Adobe added kanji printer products in 1988. Warnock and Geschke were also able to bolster the credibility of PostScript by connecting with a typesetting manufacturer. They were unable to work with Compugraphic, but then worked with Linotype to license the Helvetica and Times Roman fonts (through the Linotron 100). By 1987, PostScript had become the industry-standard printer language, with more than 400 third-party software programs and licensing agreements with 19 printer companies.
Warnock described the language as "extensible", in its ability to apply graphic arts standards to office printing.
Adobe's first products after PostScript were digital fonts, which the company released in a proprietary format called Type 1, on which Bill Paxton worked after leaving Stanford. Apple subsequently developed a competing standard, TrueType, which provided full scalability and precise control of the pixel pattern created by the font's outlines, and licensed it to Microsoft.
In the mid-1980s, Adobe entered the consumer software market with Illustrator, a vector-based drawing program for the Apple Macintosh. Illustrator, which grew from the firm's in-house font-development software, helped popularize PostScript-enabled laser printers.
Adobe entered the NASDAQ Composite index in August 1986. Its revenue has grown from roughly $1 billion in 1999 to $4 billion in 2012. Adobe's fiscal years run from December to November. For example, the 2020 fiscal year ended on November 27, 2020.
In 1989, Adobe introduced what was to become its flagship product, a graphics editing program for the Macintosh called Photoshop. Stable and full-featured, Photoshop 1.0 was ably marketed by Adobe and soon dominated the market.
In 1993, Adobe introduced PDF, the Portable Document Format, and its Adobe Acrobat and Reader software. PDF is now an International Standard: ISO 32000-1:2008.
In December 1991, Adobe released Adobe Premiere, which Adobe rebranded as Adobe Premiere Pro in 2003. In 1992, Adobe acquired OCR Systems, Inc. In 1994, Adobe acquired the Aldus Corporation and added PageMaker and After Effects to its product line later in the year; it also controls the TIFF file format. In the same year, Adobe acquired LaserTools Corp and Compution Inc. In 1995, Adobe added FrameMaker, the long-document DTP application, to its product line after Adobe acquired Frame Technology Corp. In 1996, Adobe acquired Ares Software Corp. In 2002, Adobe acquired Canadian company Accelio (also known as JetForm).
In May 2003 Adobe purchased audio editing and multitrack recording software Cool Edit Pro from Syntrillium Software for $16.5 million, as well as a large loop library called "Loopology". Adobe then renamed Cool Edit Pro to "Adobe Audition" and included it in the Creative Suite.
On December 3, 2005, Adobe acquired its main rival, Macromedia, in a stock swap valued at about $3.4 billion, adding ColdFusion, Contribute, Captivate, Breeze (rebranded as Adobe Connect), Director, Dreamweaver, Fireworks, Flash, FlashPaper, Flex, FreeHand, HomeSite, JRun, Presenter, and Authorware to Adobe's product line.
Adobe released Adobe Media Player in April 2008. On April 27, Adobe discontinued development and sales of its older HTML/web development software, GoLive, in favor of Dreamweaver. Adobe offered a discount on Dreamweaver for GoLive users and supported those who still used GoLive with online tutorials and migration assistance. On June 1, Adobe launched Acrobat.com, a series of web applications geared for collaborative work. Creative Suite 4, which includes the Design, Web, Production Premium, and Master Collection editions, came out in October 2008 in six configurations at prices from about US$1,700 to $2,500, or by individual application. The Windows version of Photoshop includes 64-bit processing. On December 3, 2008, Adobe laid off 600 of its employees (8% of the worldwide staff), citing the weak economic environment.
On September 15, 2009, Adobe Systems announced that it would acquire online marketing and web analytics company Omniture for $1.8 billion. The deal was completed on October 23, 2009. Former Omniture products were integrated into the Adobe Marketing Cloud.
On November 10, 2009, the company laid off a further 680 employees.
Adobe's 2010 was marked by continuing back-and-forth arguments with Apple over the latter's non-support for Adobe Flash on its iPhone, iPad and other products. Former Apple CEO Steve Jobs claimed that Flash was not reliable or secure enough, while Adobe executives argued that Apple wished to maintain control over the iOS platform. In April 2010, Steve Jobs published a post titled "Thoughts on Flash" where he outlined his thoughts on Flash and the rise of HTML 5.
In July 2010, Adobe bought Day Software, integrating its line of CQ products: WCM, DAM, SOCO, and Mobile.
In January 2011, Adobe acquired DemDex, Inc. with the intent of adding DemDex's audience-optimization software to its online marketing suite. At Photoshop World 2011, Adobe unveiled a new mobile photo service, Carousel, an application for iPhone, iPad, and Mac that used Photoshop Lightroom technology to let users adjust and fine-tune images on all platforms. Carousel also allowed users to automatically sync, share and browse photos. The service was later renamed "Adobe Revel".
In October 2011, Adobe acquired Nitobi Software, the makers of the mobile application development framework PhoneGap. As part of the acquisition, the source code of PhoneGap was submitted to the Apache Foundation, where it became Apache Cordova.
In November 2011, Adobe announced that they would cease development of Flash for mobile devices following version 11.1. Instead, it would focus on HTML 5 for mobile devices. In December 2011, Adobe announced that it entered into a definitive agreement to acquire privately held Efficient Frontier.
In December 2012, Adobe opened a new corporate campus in Lehi, Utah.
In 2013, Adobe endured a major security breach. Vast portions of the source code for the company's software were stolen and posted online, and over 150 million records of Adobe's customers were made readily available for download. In 2012, about 40 million sets of payment card information were compromised by a hack of Adobe.
A class-action lawsuit alleging that the company suppressed employee compensation was filed against Adobe, and three other Silicon Valley-based companies in a California federal district court in 2013. In May 2014, it was revealed the four companies, Adobe, Apple, Google, and Intel had reached agreement with the plaintiffs, 64,000 employees of the four companies, to pay a sum of $324.5 million to settle the suit.
In March 2018, at Adobe Summit, the company and NVIDIA announced a partnership to rapidly enhance their industry-leading AI and deep learning technologies. Building on years of collaboration, the companies planned to optimize the Adobe Sensei AI and machine learning framework for NVIDIA GPUs. The collaboration was intended to speed time to market and improve the performance of new Sensei-powered services for Adobe Creative Cloud and Experience Cloud customers and developers.
Adobe and NVIDIA had cooperated for over 10 years on enabling GPU acceleration for a wide range of Adobe's creative and digital experience products. This includes Sensei-powered features such as auto lip-sync in Adobe Character Animator CC and face-aware editing in Photoshop CC, as well as cloud-based AI/ML products and features such as image analysis for Adobe Stock and Lightroom CC and auto-tagging in Adobe Experience Manager.
In May 2018, Adobe stated it would buy e-commerce services provider Magento Commerce from private equity firm Permira for $1.68 billion. The deal was intended to bolster its Experience Cloud business, which provides services including analytics, advertising, and marketing. The deal closed on June 19, 2018.
In September 2018, Adobe announced its acquisition of marketing automation software company Marketo.
In October 2018, Adobe officially changed its name from Adobe Systems Incorporated to Adobe Inc.
In January 2019, Adobe announced its acquisition of 3D texturing company Allegorithmic.
In 2020, the annual Adobe Summit was canceled due to the COVID-19 pandemic. The event took place online and saw over 21 million total video views and over 2.2 million visits to the event website.
Ahead of the 2020 United States presidential election, Adobe banned political advertising features on its digital ad sales platform.
On November 9, 2020, Adobe announced it would spend US$1.5 billion to acquire Workfront, a provider of marketing collaboration software. The acquisition was completed in early December 2020.
On August 19, 2021, Adobe announced it had entered into a definitive agreement to acquire Frame.io, a leading cloud-based video collaboration platform. The transaction was valued at $1.275 billion and closed during the fourth quarter of Adobe's 2021 fiscal year.
On September 15, 2021, Adobe formally announced that it would add payment services to its e-commerce platform that year, giving merchants on the platform a way to accept payments including credit cards and PayPal.
Finances
Products
Digital Marketing Management Software
Adobe Marketing Cloud, Adobe Experience Manager (AEM 6.2), XML Documentation add-on (for AEM), Mixamo
Formats
Portable Document Format (PDF), PDF's predecessor PostScript, ActionScript, Shockwave Flash (SWF), Flash Video (FLV), and Filmstrip (.flm)
Web-hosted services
Adobe Color, Photoshop Express, Acrobat.com, Behance and Adobe Spark
3D and AR
Adobe Aero, Dimension, Mixamo, Substance 3D by Adobe
Adobe Renderer
Adobe Media Encoder
Adobe Stock
A microstock agency that presently provides over 57 million high-resolution, royalty-free images and videos available to license (via subscription or credit purchase methods). In 2015, Adobe acquired Fotolia, a stock content marketplace founded in 2005 by Thibaud Elziere, Oleg Tscheltzoff, and Patrick Chassany which operated in 23 countries. It is run as a stand-alone website.
Adobe Experience Platform
In March 2019, Adobe released its Adobe Experience Platform, which consists of a family of content, development, and customer relationship management products, along with what it calls the "next generation" of its Sensei artificial intelligence and machine learning framework.
Reception
Since 2000, Fortune has recognized Adobe as one of the 100 Best Companies to Work For. In 2021, Adobe was ranked 16th. Glassdoor recognized Adobe as a Best Place to Work. In October 2021, Fast Company included Adobe on their Brands That Matter list. In October 2008, Adobe Systems Canada Inc. was named one of "Canada's Top 100 Employers" by Mediacorp Canada Inc. and was featured in Maclean's newsmagazine.
Adobe received a five-star rating from the Electronic Frontier Foundation with regards to its handling of government data requests in 2017.
Criticisms
Pricing
Adobe has been criticized for its pricing practices, with retail prices being up to twice as much in non-US countries. For example, it is significantly cheaper to pay for a return airfare ticket to the United States and purchase one particular collection of Adobe's software there than to buy it locally in Australia.
After Adobe revealed the pricing for the Creative Suite 3 Master Collection, which was £1,000 higher for European customers, a petition to protest over "unfair pricing" was published and signed by 10,000 users. In June 2009, Adobe further increased its prices in the UK by 10% in spite of weakening of the pound against the dollar, and UK users were not allowed to buy from the US store.
Adobe's Reader and Flash programs were listed on "The 10 most hated programs of all time" article by TechRadar.
In April 2021, Adobe received heavy criticism for its cancellation fees after a customer shared a tweet showing a $291.45 charge for cancelling an Adobe Creative Cloud subscription. Many others shared the cancellation fees they had been charged, leading some to encourage piracy of Adobe products or the purchase of lower-priced alternatives.
Security
Hackers have exploited vulnerabilities in Adobe programs, such as Adobe Reader, to gain unauthorized access to computers. Adobe's Flash Player has also been criticized for, among other things, suffering from performance, memory usage and security problems (see criticism of Flash Player). A report by security researchers from Kaspersky Lab criticized Adobe for producing products that appeared in its list of the top 10 software security vulnerabilities.
Observers noted that Adobe was spying on its customers by including spyware in the Creative Suite 3 software and quietly sending user data to a firm named Omniture. When users became aware, Adobe explained what the suspicious software did and admitted that they: "could and should do a better job taking security concerns into account". When a security flaw was later discovered in Photoshop CS5, Adobe sparked outrage by saying it would leave the flaw unpatched, so anyone who wanted to use the software securely would have to pay for an upgrade. Following a fierce backlash Adobe decided to provide the software patch.
Adobe has been criticized for pushing unwanted software including third-party browser toolbars and free virus scanners, usually as part of the Flash update process, and for pushing a third-party scareware program designed to scare users into paying for unneeded system repairs.
Customer data breach
On October 3, 2013, the company initially revealed that 2.9 million customers' sensitive and personal data had been stolen in a security breach, including encrypted credit card information. Adobe later admitted that 38 million active users had been affected and that the attackers had obtained access to their IDs and encrypted passwords, as well as to many inactive Adobe accounts. The company did not make it clear whether all the personal information, such as email addresses and physical addresses, was encrypted, though data privacy laws in 44 states require this information to be encrypted.
A 3.8 GB file stolen from Adobe and containing 152 million usernames, reversibly encrypted passwords and unencrypted password hints was posted on AnonNews.org. LastPass, a password security firm, said that Adobe had failed to use best practices for securing the passwords and had not salted them. Another security firm, Sophos, showed that Adobe used a weak encryption method permitting the recovery of a lot of information with very little effort. According to IT expert Simon Bain, Adobe had failed its customers and 'should hang their heads in shame'.
Many of the credit cards were tied to the Creative Cloud software-by-subscription service. Adobe offered its affected US customers a free membership in a credit monitoring service, but no similar arrangements have been made for non-US customers. When a data breach occurs in the US, penalties depend on the state where the victim resides, not where the company is based.
After stealing the customers' data, cyber-thieves also accessed Adobe's source code repository, likely in mid-August 2013. Because hackers acquired copies of the source code of Adobe proprietary products, they could find and exploit any potential weaknesses in its security, computer experts warned. Security researcher Alex Holden, chief information security officer of Hold Security, characterized this Adobe breach, which affected Acrobat, ColdFusion and numerous other applications, as "one of the worst in US history". Adobe also announced that hackers stole parts of the source code of Photoshop, which according to commentators could allow programmers to copy its engineering techniques and would make it easier to pirate Adobe's expensive products.
Published on a server of a Russian-speaking hacker group, the "disclosure of encryption algorithms, other security schemes, and software vulnerabilities can be used to bypass protections for individual and corporate data" and may have opened the gateway to new generation zero-day attacks. Hackers already used ColdFusion exploits to make off with usernames and encrypted passwords of PR Newswire's customers, which has been tied to the Adobe security breach. They also used a ColdFusion exploit to breach Washington state court and expose up to 200,000 Social Security numbers.
Anti-competitive practices
In 1994, Adobe acquired Aldus Corp., a software vendor that sold FreeHand, a competing product. FreeHand was direct competition to Adobe Illustrator, Adobe's flagship vector-graphics editor. The Federal Trade Commission intervened and forced Adobe to sell FreeHand back to Altsys, and also banned Adobe from buying back FreeHand or any similar program for the next 10 years (1994–2004). Altsys was then bought by Macromedia, which released versions 5 to 11. After Adobe acquired Macromedia in December 2005, it halted development of FreeHand in 2007, effectively rendering it obsolete. With FreeHand and Illustrator, Adobe controlled the only two products that competed in the professional illustration program market for Macintosh operating systems.
In 2011, a group of 5,000 FreeHand graphic designers convened under the banner Free FreeHand, and filed a civil antitrust complaint in the US District Court for the Northern District of California against Adobe. The suit alleged that Adobe had violated federal and state antitrust laws by abusing its dominant position in the professional vector graphic illustration software market and that Adobe had engaged in a series of exclusionary and anti-competitive acts and strategies designed to kill FreeHand, the dominant competitor to Adobe's Illustrator software product, instead of competing on the basis of product merit according to the principles of free market capitalism. Adobe had no response to the claims and the lawsuit was eventually settled. The FreeHand community believes Adobe should release the product to an open-source community if it cannot update it internally.
On its FreeHand product page, Adobe stated, "While we recognize FreeHand has a loyal customer base, we encourage users to migrate to the new Adobe Illustrator CS4 software which supports both PowerPC and Intel-based Macs and Microsoft Windows XP and Windows Vista." The FreeHand page has since been removed; it now simply redirects to the Illustrator page. Adobe's software FTP server still contains a directory for FreeHand, but it is empty.
Chief executive officers
John Warnock (1982–2000)
Bruce Chizen (2000–2007)
Shantanu Narayen (2007–present)
See also
Adobe MAX
Digital rights management (DRM)
List of acquisitions by Adobe
List of Adobe software
US v. ElcomSoft Sklyarov
References
External links
1982 establishments in California
Companies based in San Jose, California
Companies listed on the Nasdaq
Multinational companies headquartered in the United States
Software companies based in the San Francisco Bay Area
Software companies established in 1982
Type foundries
American companies established in 1982
1980s initial public offerings
Software companies of the United States |
2112 | https://en.wikipedia.org/wiki/Associative%20algebra | Associative algebra | In mathematics, an associative algebra A is an algebraic structure with compatible operations of addition, multiplication (assumed to be associative), and a scalar multiplication by elements in some field K. The addition and multiplication operations together give A the structure of a ring; the addition and scalar multiplication operations together give A the structure of a vector space over K. In this article we will also use the term K-algebra to mean an associative algebra over the field K. A standard first example of a K-algebra is a ring of square matrices over a field K, with the usual matrix multiplication.
A commutative algebra is an associative algebra that has a commutative multiplication, or, equivalently, an associative algebra that is also a commutative ring.
In this article associative algebras are assumed to have a multiplicative identity, denoted 1; they are sometimes called unital associative algebras for clarification. In some areas of mathematics this assumption is not made, and we will call such structures non-unital associative algebras. We will also assume that all rings are unital, and all ring homomorphisms are unital.
Many authors consider the more general concept of an associative algebra over a commutative ring R, instead of a field: An R-algebra is an R-module with an associative R-bilinear binary operation, which also contains a multiplicative identity. For examples of this concept, if S is any ring with center C, then S is an associative C-algebra.
Definition
Let R be a commutative ring (so R could be a field). An associative R-algebra (or more simply, an R-algebra) is a ring
that is also an R-module in such a way that the two additions (the ring addition and the module addition) are the same operation, and scalar multiplication satisfies
r · (xy) = (r · x)y = x(r · y)
for all r in R and x, y in the algebra. (This definition implies that the algebra is unital, since rings are supposed to have a multiplicative identity.)
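As a quick numerical illustration of these axioms (a minimal sketch, not part of the original text), one can take the algebra to be the 2 × 2 real matrices, the standard first example mentioned in the lead; NumPy and the particular matrices below are arbitrary choices made only for the check.

```python
import numpy as np

# A = M_2(R), the 2x2 real matrices, viewed as an algebra over R = real numbers.
r = 3.0                                   # a scalar from R
x = np.array([[1.0, 2.0], [0.0, 1.0]])    # elements of the algebra
y = np.array([[0.0, 1.0], [1.0, 4.0]])
z = np.array([[2.0, 0.0], [1.0, 1.0]])

# Ring axiom: matrix multiplication is associative (spot-checked).
assert np.allclose((x @ y) @ z, x @ (y @ z))

# Compatibility of scalar multiplication with the ring multiplication:
# r·(xy) = (r·x)y = x(r·y)
assert np.allclose(r * (x @ y), (r * x) @ y)
assert np.allclose(r * (x @ y), x @ (r * y))

# Multiplicative identity: the identity matrix acts as 1.
one = np.eye(2)
assert np.allclose(one @ x, x) and np.allclose(x @ one, x)
```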
Equivalently, an associative algebra A is a ring together with a ring homomorphism from R to the center of A. If f is such a homomorphism, the scalar multiplication is r · x = f(r)x (here the multiplication is the ring multiplication); if the scalar multiplication is given, the ring homomorphism is given by f(r) = r · 1, where 1 is the multiplicative identity of A (see also below).
Every ring is an associative Z-algebra, where Z denotes the ring of the integers.
A commutative algebra is an associative algebra that is also a commutative ring.
As a monoid object in the category of modules
The definition is equivalent to saying that a unital associative R-algebra is a monoid object in R-Mod (the monoidal category of R-modules). By definition, a ring is a monoid object in the category of abelian groups; thus, the notion of an associative algebra is obtained by replacing the category of abelian groups with the category of modules.
Pushing this idea further, some authors have introduced a "generalized ring" as a monoid object in some other category that behaves like the category of modules. Indeed, this reinterpretation allows one to avoid making an explicit reference to elements of an algebra A. For example, the associativity can be expressed as follows. By the universal property of a tensor product of modules, the multiplication (the R-bilinear map) corresponds to a unique R-linear map
m : A ⊗R A → A.
The associativity then refers to the identity m ∘ (id ⊗ m) = m ∘ (m ⊗ id).
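Spelled out together with the unit map (a standard formulation, with m denoting the multiplication map above and η : R → A the R-linear unit map), the monoid-object axioms read:

```latex
m \circ (m \otimes \mathrm{id}_A) = m \circ (\mathrm{id}_A \otimes m),
\qquad
m \circ (\eta \otimes \mathrm{id}_A) = \mathrm{id}_A = m \circ (\mathrm{id}_A \otimes \eta),
```

where the unit axioms implicitly use the canonical isomorphisms R ⊗R A ≅ A ≅ A ⊗R R.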
From ring homomorphisms
An associative algebra amounts to a ring homomorphism whose image lies in the center. Indeed, starting with a ring A and a ring homomorphism η : R → A whose image lies in the center of A, we can make A an R-algebra by defining
r · x = η(r)x
for all r ∈ R and x ∈ A. If A is an R-algebra, taking x = 1, the same formula in turn defines a ring homomorphism η : R → A whose image lies in the center.
If a ring is commutative then it equals its center, so that a commutative R-algebra can be defined simply as a commutative ring A together with a commutative ring homomorphism η : R → A.
The ring homomorphism η appearing in the above is often called a structure map. In the commutative case, one can consider the category whose objects are ring homomorphisms R → A, i.e., commutative R-algebras, and whose morphisms are ring homomorphisms A → A′ that are under R; i.e., R → A → A′ is R → A′ (i.e., the coslice category of the category of commutative rings under R). The prime spectrum functor Spec then determines an anti-equivalence of this category to the category of affine schemes over Spec R.
How to weaken the commutativity assumption is a subject matter of noncommutative algebraic geometry and, more recently, of derived algebraic geometry. See also: generic matrix ring.
Algebra homomorphisms
A homomorphism between two R-algebras is an R-linear ring homomorphism. Explicitly, φ : A1 → A2 is an associative algebra homomorphism if
φ(r · x) = r · φ(x),  φ(x + y) = φ(x) + φ(y),  φ(xy) = φ(x)φ(y),  φ(1) = 1
for all r in R and x, y in A1.
The class of all R-algebras together with algebra homomorphisms between them form a category, sometimes denoted R-Alg.
The subcategory of commutative R-algebras can be characterized as the coslice category R/CRing where CRing is the category of commutative rings.
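As a small concrete check of these conditions (an illustrative sketch only; the sampled points and tolerance are arbitrary), complex conjugation can be verified numerically to be an R-algebra homomorphism from C to itself, R being the real numbers:

```python
# Complex conjugation as an R-algebra homomorphism C -> C,
# spot-checked numerically on a few sample elements.
import random

def conj(z: complex) -> complex:
    return z.conjugate()

random.seed(0)
samples = [complex(random.uniform(-5, 5), random.uniform(-5, 5)) for _ in range(10)]

for x in samples:
    for y in samples:
        assert abs(conj(x + y) - (conj(x) + conj(y))) < 1e-9  # additive
        assert abs(conj(x * y) - conj(x) * conj(y)) < 1e-9    # multiplicative

for r in (2.0, -3.5, 0.25):                                   # scalars from R
    for x in samples:
        assert abs(conj(r * x) - r * conj(x)) < 1e-9          # R-linear

assert conj(complex(1, 0)) == complex(1, 0)                   # sends 1 to 1
```

Note that conjugation is only R-linear, not C-linear, so it is a homomorphism (indeed an automorphism) of C viewed as an R-algebra, but not of C viewed as an algebra over itself.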
Examples
The most basic example is a ring itself; it is an algebra over its center or any subring lying in the center. In particular, any commutative ring is an algebra over any of its subrings. Other examples abound both from algebra and other fields of mathematics.
Algebra
Any ring A can be considered as a Z-algebra. The unique ring homomorphism from Z to A is determined by the fact that it must send 1 to the identity in A. Therefore, rings and Z-algebras are equivalent concepts, in the same way that abelian groups and Z-modules are equivalent.
Any ring of characteristic n is a (Z/nZ)-algebra in the same way.
Given an R-module M, the endomorphism ring of M, denoted EndR(M) is an R-algebra by defining (r·φ)(x) = r·φ(x).
Any ring of matrices with coefficients in a commutative ring R forms an R-algebra under matrix addition and multiplication. This coincides with the previous example when M is a finitely-generated, free R-module.
In particular, the square n-by-n matrices with entries from the field K form an associative algebra over K.
The complex numbers form a 2-dimensional commutative algebra over the real numbers.
The quaternions form a 4-dimensional associative algebra over the reals (but not an algebra over the complex numbers, since the complex numbers are not in the center of the quaternions).
The polynomials with real coefficients form a commutative algebra over the reals.
Every polynomial ring R[x1, ..., xn] is a commutative R-algebra. In fact, this is the free commutative R-algebra on the set {x1, ..., xn}.
The free R-algebra on a set E is an algebra of "polynomials" with coefficients in R and noncommuting indeterminates taken from the set E.
The tensor algebra of an R-module is naturally an associative R-algebra. The same is true for quotients such as the exterior and symmetric algebras. Categorically speaking, the functor that maps an R-module to its tensor algebra is left adjoint to the functor that sends an R-algebra to its underlying R-module (forgetting the multiplicative structure).
The following ring is used in the theory of λ-rings. Given a commutative ring A, consider the set of formal power series over A with constant term 1. It is an abelian group whose group operation is the multiplication of power series, and it can be made into a ring with a second multiplication determined by a defining condition together with the ring axioms; the additive identity of this ring is the power series 1. If A is a λ-ring, then there is a ring homomorphism from A into this ring of power series, giving the latter the structure of an A-algebra.
Representation theory
The universal enveloping algebra of a Lie algebra is an associative algebra that can be used to study the given Lie algebra.
If G is a group and R is a commutative ring, the set of all functions from G to R with finite support form an R-algebra with the convolution as multiplication. It is called the group algebra of G. The construction is the starting point for the application to the study of (discrete) groups.
If G is an algebraic group (e.g., semisimple complex Lie group), then the coordinate ring of G is the Hopf algebra A corresponding to G. Many structures of G translate to those of A.
A quiver algebra (or a path algebra) of a directed graph is the free associative algebra over a field generated by the paths in the graph.
Analysis
Given any Banach space X, the continuous linear operators A : X → X form an associative algebra (using composition of operators as multiplication); this is a Banach algebra.
Given any topological space X, the continuous real- or complex-valued functions on X form a real or complex associative algebra; here the functions are added and multiplied pointwise.
The set of semimartingales defined on the filtered probability space (Ω, F, (Ft)t ≥ 0, P) forms a ring under stochastic integration.
The Weyl algebra
An Azumaya algebra
Geometry and combinatorics
The Clifford algebras, which are useful in geometry and physics.
Incidence algebras of locally finite partially ordered sets are associative algebras considered in combinatorics.
Constructions
Subalgebras: A subalgebra of an R-algebra A is a subset of A which is both a subring and a submodule of A. That is, it must be closed under addition, ring multiplication, scalar multiplication, and it must contain the identity element of A.
Quotient algebras: Let A be an R-algebra. Any ring-theoretic ideal I in A is automatically an R-module since r · x = (r1A)x. This gives the quotient ring A / I the structure of an R-module and, in fact, an R-algebra. It follows that any ring homomorphic image of A is also an R-algebra.
Direct products: The direct product of a family of R-algebras is the ring-theoretic direct product. This becomes an R-algebra with the obvious scalar multiplication.
Free products: One can form a free product of R-algebras in a manner similar to the free product of groups. The free product is the coproduct in the category of R-algebras.
Tensor products: The tensor product of two R-algebras is also an R-algebra in a natural way. See tensor product of algebras for more details. Given a commutative ring R and any ring A the tensor product R ⊗Z A can be given the structure of an R-algebra by defining r · (s ⊗ a) = (rs ⊗ a). The functor which sends A to R ⊗Z A is left adjoint to the functor which sends an R-algebra to its underlying ring (forgetting the module structure). See also: Change of rings.
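For the tensor product of two R-algebras A and B, the multiplication is determined on elementary tensors by the usual rule (stated here for concreteness):

```latex
(a_1 \otimes b_1)(a_2 \otimes b_2) = a_1 a_2 \otimes b_1 b_2 ,
```

extended R-bilinearly to all of A ⊗R B.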
Separable algebra
Let A be an algebra over a commutative ring R. Then A is a right module over Ae := Aop ⊗R A with the action x · (a ⊗ b) = axb. Then, by definition, A is said to be separable if the multiplication map A ⊗R A → A, x ⊗ y ↦ xy, splits as an Ae-linear map, where A ⊗R A is an Ae-module by (x ⊗ y) · (a ⊗ b) = ax ⊗ yb. Equivalently,
A is separable if it is a projective module over Ae; thus, the Ae-projective dimension of A, sometimes called the bidimension of A, measures the failure of separability.
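Another standard reformulation (included here for concreteness) is in terms of a separability idempotent: an element

```latex
e = \sum_i x_i \otimes y_i \in A \otimes_R A
\qquad \text{with} \qquad
\sum_i x_i y_i = 1
\quad \text{and} \quad
a e = e a \ \text{ for all } a \in A,
```

where a acts on A ⊗R A through the left factor on the left and the right factor on the right; such an element is precisely the image of 1 under an Ae-linear splitting of the multiplication map, so A is separable if and only if a separability idempotent exists.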
Finite-dimensional algebra
Let A be a finite-dimensional algebra over a field k. Then A is an Artinian ring.
Commutative case
As A is Artinian, if it is commutative, then it is a finite product of Artinian local rings whose residue fields are algebras over the base field k. Now, a reduced Artinian local ring is a field, and thus the following are equivalent:
A is separable.
A ⊗k k̄ is reduced, where k̄ is some algebraic closure of k.
A ⊗k k̄ is isomorphic to k̄ × ⋯ × k̄ (n factors) for some n.
The dimension of A over k equals the number of k-algebra homomorphisms A → k̄.
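A small worked example over k = R, the real numbers (a standard one, added for illustration), may help:

```latex
A_1 = \mathbb{R}[x]/(x^2 + 1) \cong \mathbb{C}, \qquad
A_1 \otimes_{\mathbb{R}} \mathbb{C} \cong \mathbb{C}[x]/(x^2 + 1) \cong \mathbb{C} \times \mathbb{C},
```

so A1 is reduced after extending scalars and hence separable; consistently, A1 has dimension 2 over R and there are exactly two R-algebra homomorphisms A1 → C, namely x ↦ i and x ↦ −i. By contrast, A2 = R[x]/(x²) satisfies A2 ⊗R C ≅ C[x]/(x²), which contains the nonzero nilpotent x, so A2 is not reduced and therefore not separable.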
Noncommutative case
Since a simple Artinian ring is a (full) matrix ring over a division ring, if A is a simple algebra, then A is a (full) matrix algebra over a division algebra D over k; i.e., A ≅ Mn(D) for some n. More generally, if A is a semisimple algebra, then it is a finite product of matrix algebras (over various division k-algebras), a fact known as the Artin–Wedderburn theorem.
The fact that A is Artinian simplifies the notion of a Jacobson radical; for an Artinian ring, the Jacobson radical of A is the intersection of all (two-sided) maximal ideals (in contrast, in general, a Jacobson radical is the intersection of all left maximal ideals or the intersection of all right maximal ideals.)
The Wedderburn principal theorem states: for a finite-dimensional algebra A with a nilpotent ideal I, if the projective dimension of A/I as a module over the enveloping algebra (A/I)e is at most one, then the natural surjection p : A → A/I splits; i.e., A contains a subalgebra B such that the restriction of p to B is an isomorphism onto A/I. Taking I to be the Jacobson radical, the theorem says in particular that the Jacobson radical is complemented by a semisimple algebra. The theorem is an analog of Levi's theorem for Lie algebras.
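A small standard example (added here for illustration, not drawn from the source): take A to be the algebra of upper triangular 2 × 2 matrices over k. Then

```latex
I = \operatorname{rad}(A) = \begin{pmatrix} 0 & k \\ 0 & 0 \end{pmatrix},
\qquad
B = \begin{pmatrix} k & 0 \\ 0 & k \end{pmatrix} \cong A/I \cong k \times k,
\qquad
A = B \oplus I,
```

so the nilpotent Jacobson radical I is complemented by the semisimple subalgebra B of diagonal matrices, exactly as the theorem predicts.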
Lattices and orders
Let R be a Noetherian integral domain with field of fractions K (for example, R and K can be Z and Q). A lattice L in a finite-dimensional K-vector space V is a finitely generated R-submodule of V that spans V; in other words, L ⊗R K ≅ V.
Let A be a finite-dimensional K-algebra. An order in A is an R-subalgebra of A that is a lattice. In general, there are a lot fewer orders than lattices; e.g., (1/2)Z is a lattice in Q but not an order (since it is not an algebra).
A maximal order is an order that is maximal among all the orders.
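Concrete standard examples (added for illustration) with R = Z and K = Q:

```latex
\mathbb{Z}[i] \subset \mathbb{Q}(i),
\qquad
M_n(\mathbb{Z}) \subset M_n(\mathbb{Q}),
\qquad
\mathbb{Z}[2i] \subsetneq \mathbb{Z}[i],
```

the Gaussian integers Z[i] form a maximal order in the Q-algebra Q(i), the integer matrices Mn(Z) form an order in the matrix algebra Mn(Q), and Z[2i] is an order in Q(i) that is not maximal, since it is properly contained in the order Z[i].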
Related concepts
Coalgebras
An associative algebra over K is given by a K-vector space A endowed with a bilinear map A × A → A having two inputs (multiplicator and multiplicand) and one output (product), as well as a morphism K → A identifying the scalar multiples of the multiplicative identity. If the bilinear map A × A → A is reinterpreted as a linear map (i. e., morphism in the category of K-vector spaces) A ⊗ A → A (by the universal property of the tensor product), then we can view an associative algebra over K as a K-vector space A endowed with two morphisms (one of the form A ⊗ A → A and one of the form K → A) satisfying certain conditions that boil down to the algebra axioms. These two morphisms can be dualized using categorial duality by reversing all arrows in the commutative diagrams that describe the algebra axioms; this defines the structure of a coalgebra.
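For reference, the dualized structure consists of a comultiplication Δ : C → C ⊗ C and a counit ε : C → K (the notation is chosen here for illustration) satisfying

```latex
(\Delta \otimes \mathrm{id}) \circ \Delta = (\mathrm{id} \otimes \Delta) \circ \Delta,
\qquad
(\varepsilon \otimes \mathrm{id}) \circ \Delta = \mathrm{id} = (\mathrm{id} \otimes \varepsilon) \circ \Delta,
```

the arrow-reversed analogues of associativity and unitality.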
There is also an abstract notion of F-coalgebra, where F is a functor. This is vaguely related to the notion of coalgebra discussed above.
Representations
A representation of an algebra A is an algebra homomorphism ρ : A → End(V) from A to the endomorphism algebra of some vector space (or module) V. The property of ρ being an algebra homomorphism means that ρ preserves the multiplicative operation (that is, ρ(xy) = ρ(x)ρ(y) for all x and y in A), and that ρ sends the unit of A to the unit of End(V) (that is, to the identity endomorphism of V).
If A and B are two algebras, and ρ : A → End(V) and τ : B → End(W) are two representations, then there is a (canonical) representation A ⊗ B → End(V ⊗ W) of the tensor product algebra A ⊗ B on the vector space V ⊗ W. However, there is no natural way of defining a tensor product of two representations of a single associative algebra in such a way that the result is still a representation of that same algebra (not of its tensor product with itself), without somehow imposing additional conditions. Here, by tensor product of representations, the usual meaning is intended: the result should be a linear representation of the same algebra on the product vector space. Imposing such additional structure typically leads to the idea of a Hopf algebra or a Lie algebra, as demonstrated below.
Motivation for a Hopf algebra
Consider, for example, two representations σ : A → End(V) and τ : A → End(W). One might try to form a tensor product representation ρ : x ↦ σ(x) ⊗ τ(x) according to how it acts on the product vector space, so that
ρ(x)(v ⊗ w) = (σ(x)v) ⊗ (τ(x)w).
However, such a map would not be linear, since one would have
ρ(kx) = σ(kx) ⊗ τ(kx) = kσ(x) ⊗ kτ(x) = k²(σ(x) ⊗ τ(x)) = k²ρ(x)
for k ∈ K. One can rescue this attempt and restore linearity by imposing additional structure, by defining an algebra homomorphism Δ: A → A ⊗ A, and defining the tensor product representation as
ρ = (σ ⊗ τ) ∘ Δ.
Such a homomorphism Δ is called a comultiplication if it satisfies certain axioms. The resulting structure is called a bialgebra. To be consistent with the definitions of the associative algebra, the coalgebra must be co-associative, and, if the algebra is unital, then the co-algebra must be co-unital as well. A Hopf algebra is a bialgebra with an additional piece of structure (the so-called antipode), which allows not only to define the tensor product of two representations, but also the Hom module of two representations (again, similarly to how it is done in the representation theory of groups).
Motivation for a Lie algebra
One can try to be more clever in defining a tensor product. Consider, for example,
Δ(x) = x ⊗ 1 + 1 ⊗ x,
so that the action on the tensor product space is given by
ρ(x)(v ⊗ w) = (σ(x)v) ⊗ w + v ⊗ (τ(x)w).
This map is clearly linear in x, and so it does not have the problem of the earlier definition. However, it fails to preserve multiplication:
ρ(xy)(v ⊗ w) = (σ(x)σ(y)v) ⊗ w + v ⊗ (τ(x)τ(y)w).
But, in general, this does not equal
ρ(x)ρ(y)(v ⊗ w) = (σ(x)σ(y)v) ⊗ w + (σ(x)v) ⊗ (τ(y)w) + (σ(y)v) ⊗ (τ(x)w) + v ⊗ (τ(x)τ(y)w).
This shows that this definition of a tensor product is too naive; the obvious fix is to define it such that it is antisymmetric, so that the middle two terms cancel. This leads to the concept of a Lie algebra.
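Concretely (a standard computation spelling out the remark above): the two middle terms are symmetric under exchanging x and y, so they cancel when one passes to commutators. Writing ρ(x) = σ(x) ⊗ idW + idV ⊗ τ(x), one finds

```latex
\rho(x)\rho(y) - \rho(y)\rho(x)
= \bigl(\sigma(x)\sigma(y) - \sigma(y)\sigma(x)\bigr) \otimes \mathrm{id}_W
+ \mathrm{id}_V \otimes \bigl(\tau(x)\tau(y) - \tau(y)\tau(x)\bigr)
= \rho(xy - yx),
```

so although ρ fails to preserve the associative multiplication, it does satisfy ρ([x, y]) = [ρ(x), ρ(y)] with [a, b] = ab − ba, which is exactly the compatibility required of a representation of the Lie algebra obtained from A by taking the commutator as the bracket.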
Non-unital algebras
Some authors use the term "associative algebra" to refer to structures which do not necessarily have a multiplicative identity, and hence consider homomorphisms which are not necessarily unital.
One example of a non-unital associative algebra is given by the set of all functions f : R → R whose limit as x nears infinity is zero.
Another example is the vector space of continuous periodic functions, together with the convolution product.
See also
Abstract algebra
Algebraic structure
Algebra over a field
Sheaf of algebras, a sort of an algebra over a ringed space
Notes
References
Nathan Jacobson, Structure of Rings
James Byrnie Shaw (1907) A Synopsis of Linear Associative Algebra, link from Cornell University Historical Math Monographs.
Ross Street (1998) Quantum Groups: an entrée to modern algebra, an overview of index-free notation.
Algebras
Algebraic geometry |
2114 | https://en.wikipedia.org/wiki/IBM%20AIX | IBM AIX | AIX (Advanced Interactive eXecutive, pronounced “ay-eye-ex”) is a series of proprietary Unix operating systems developed and sold by IBM for several of its computer platforms. Originally released for the IBM RT PC RISC workstation in 1986, AIX has supported a wide variety of hardware platforms, including the IBM RS/6000 series and later Power and PowerPC-based systems, IBM System i, System/370 mainframes, PS/2 personal computers, and the Apple Network Server. It is currently supported on IBM Power Systems alongside IBM i and Linux.
AIX is based on UNIX System V with 4.3BSD-compatible extensions. It is certified to the UNIX 03 and UNIX V7 marks of the Single UNIX Specification, beginning with AIX versions 5.3 and 7.2 TL5 respectively. Older versions were previously certified to the UNIX 95 and UNIX 98 marks.
AIX was the first operating system to have a journaling file system, and IBM has continuously enhanced the software with features such as processor, disk and network virtualization, dynamic hardware resource allocation (including fractional processor units), and reliability engineering ported from its mainframe designs.
History
Unix started life at AT&T's Bell Labs research center in the early 1970s, running on DEC minicomputers. By 1976, the operating system was in use at various academic institutions, including Princeton, where Tom Lyon and others ported it to the S/370, to run as a guest OS under VM/370. This port would later grow out to become UTS, a mainframe Unix offering by IBM's competitor Amdahl Corporation.
IBM's own involvement in Unix can be dated to 1979, when it assisted Bell Labs in doing its own Unix port to the 370 (to be used as a build host for the 5ESS switch's software). In the process, IBM made modifications to the TSS/370 hypervisor to better support Unix.
It took until 1985 for IBM to offer its own Unix on the S/370 platform, IX/370, which was developed by Interactive Systems Corporation and intended by IBM to compete with Amdahl UTS. The operating system offered special facilities for interoperating with PC/IX, Interactive/IBM's version of Unix for IBM PC compatible hardware, and was licensed at $10,000 per sixteen concurrent users.
AIX Version 1, introduced in 1986 for the IBM RT PC workstation, was based on UNIX System V Releases 1 and 2. In developing AIX, IBM and Interactive Systems Corporation (whom IBM contracted) also incorporated source code from 4.2 and 4.3 BSD UNIX.
Among other variants, IBM later produced AIX Version 3 (also known as AIX/6000), based on System V Release 3, for their POWER-based RS/6000 platform. Since 1990, AIX has served as the primary operating system for the RS/6000 series (later renamed IBM eServer pSeries, then IBM System p, and now IBM Power Systems). AIX Version 4, introduced in 1994, added symmetric multiprocessing with the introduction of the first RS/6000 SMP servers and continued to evolve through the 1990s, culminating with AIX 4.3.3 in 1999. Version 4.1, in a slightly modified form, was also the standard operating system for the Apple Network Server systems sold by Apple Computer to complement the Macintosh line.
In the late 1990s, under Project Monterey, IBM and the Santa Cruz Operation planned to integrate AIX and UnixWare into a single 32-bit/64-bit multiplatform UNIX with particular emphasis on running on Intel IA-64 (Itanium) architecture CPUs. A beta test version of AIX 5L for IA-64 systems was released, but according to documents released in the SCO v. IBM lawsuit, less than forty licenses for the finished Monterey Unix were ever sold before the project was terminated in 2002. In 2003, the SCO Group alleged that (among other infractions) IBM had misappropriated licensed source code from UNIX System V Release 4 for incorporation into AIX; SCO subsequently withdrew IBM's license to develop and distribute AIX. IBM maintains that their license was irrevocable, and continued to sell and support the product until the litigation was adjudicated.
AIX was a component of the 2003 SCO v. IBM lawsuit, in which the SCO Group filed a lawsuit against IBM, alleging IBM contributed SCO's intellectual property to the Linux codebase. The SCO Group, who argued they were the rightful owners of the copyrights covering the Unix operating system, attempted to revoke IBM's license to sell or distribute the AIX operating system. In March 2010, a jury returned a verdict finding that Novell, not the SCO Group, owns the rights to Unix.
AIX 6 was announced in May 2007, and it ran as an open beta from June 2007 until the general availability (GA) of AIX 6.1 on November 9, 2007. Major new features in AIX 6.1 included full role-based access control, workload partitions (which enable application mobility), enhanced security (Addition of AES encryption type for NFS v3 and v4), and Live Partition Mobility on the POWER6 hardware.
AIX 7.1 was announced in April 2010, and an open beta ran until general availability of AIX 7.1 in September 2010. Several new features, including better scalability, enhanced clustering and management capabilities were added. AIX 7.1 includes a new built-in clustering capability called Cluster Aware AIX. AIX is able to organize multiple LPARs through the multipath communications channel to neighboring CPUs, enabling very high-speed communication between processors. This enables multi-terabyte memory address range and page table access to support global petabyte shared memory space for AIX POWER7 clusters so that software developers can program a cluster as if it were a single system, without using message passing (i.e. semaphore-controlled Inter-process Communication). AIX administrators can use this new capability to cluster a pool of AIX nodes. By default, AIX V7.1 pins kernel memory and includes support to allow applications to pin their kernel stack. Pinning kernel memory and the kernel stack for applications with real-time requirements can provide performance improvements by ensuring that the kernel memory and kernel stack for an application is not paged out.
AIX 7.2 was announced in October 2015, and released in December 2015. The principal feature of AIX 7.2 is the Live Kernel Update capability, which allows OS fixes to replace the entire AIX kernel with no impact to applications, by live migrating workloads to a temporary surrogate AIX OS partition while the original OS partition is patched. AIX 7.2 was also restructured to remove obsolete components. The networking component, bos.net.tcp.client was repackaged to allow additional installation flexibility. Unlike AIX 7.1, AIX 7.2 is only supported on systems based on POWER7 or later processors.
Supported hardware platforms
IBM RT PC
The original AIX (sometimes called AIX/RT) was developed for the IBM RT PC workstation by IBM in conjunction with Interactive Systems Corporation, who had previously ported UNIX System III to the IBM PC for IBM as PC/IX. According to its developers, the AIX source (for this initial version) consisted of one million lines of code. Installation media consisted of eight 1.2 MB floppy disks. The RT was based on the IBM ROMP microprocessor, the first commercial RISC chip. This was based on a design pioneered at IBM Research (the IBM 801).
One of the novel aspects of the RT design was the use of a microkernel, called Virtual Resource Manager (VRM). The keyboard, mouse, display, disk drives and network were all controlled by a microkernel. One could "hotkey" from one operating system to the next using the Alt-Tab key combination. Each OS in turn would get possession of the keyboard, mouse and display. Besides AIX v2, the PICK OS also included this microkernel.
Much of the AIX v2 kernel was written in the PL/8 programming language, which proved troublesome during the migration to AIX v3. AIX v2 included full TCP/IP networking, as well as SNA and two networking file systems: NFS, licensed from Sun Microsystems, and Distributed Services (DS). DS had the distinction of being built on top of SNA, and thereby being fully compatible with DS on other IBM systems, including midrange systems running OS/400 through IBM i. For the graphical user interfaces, AIX v2 came with the X10R3 and later the X10R4 and X11 versions of the X Window System from MIT, together with the Athena widget set. Compilers for Fortran and C were available.
IBM PS/2 series
AIX PS/2 (also known as AIX/386) was developed by Locus Computing Corporation under contract to IBM. AIX PS/2, first released in October 1988, ran on IBM PS/2 personal computers with Intel 386 and compatible processors.
The product was announced in September 1988 with a baseline tag price of $595, although some utilities like uucp were included in a separate Extension package priced at $250. nroff and troff for AIX were also sold separately in a Text Formatting System package priced at $200. The TCP/IP stack for AIX PS/2 retailed for another $300. The X Window package was priced at $195, and featured a graphical environment called the AIXwindows Desktop, based on IXI's X.desktop. The C and FORTRAN compilers each had a price tag of $275. Locus also made available their DOS Merge virtual machine environment for AIX, which could run MS DOS 3.3 applications inside AIX; DOS Merge was sold separately for another $250. IBM also offered a $150 AIX PS/2 DOS Server Program, which provided file server and print server services for client computers running PC DOS 3.3.
The last version of PS/2 AIX is 1.3. It was released in 1992 and announced to add support for non-IBM (non-microchannel) computers as well. Support for PS/2 AIX ended in March 1995.
IBM mainframes
In 1988, IBM announced AIX/370, also developed by Locus Computing. AIX/370 was IBM's fourth attempt to offer Unix-like functionality for their mainframe line, specifically the System/370 (the prior versions were a TSS/370-based Unix system developed jointly with AT&T c.1980, a VM/370-based system named VM/IX developed jointly with Interactive Systems Corporation c.1984, and a VM/370-based version of TSS/370 named IX/370 which was upgraded to be compatible with Unix System V). AIX/370 was released in 1990 with functional equivalence to System V Release 2 and 4.3BSD as well as IBM enhancements. With the introduction of the ESA/390 architecture, AIX/370 was replaced by AIX/ESA in 1991, which was based on OSF/1, and also ran on the System/390 platform. This development effort was made partly to allow IBM to compete with Amdahl UTS. Unlike AIX/370, AIX/ESA ran both natively as the host operating system, and as a guest under VM. AIX/ESA, while technically advanced, had little commercial success, partially because UNIX functionality was added as an option to the existing mainframe operating system, MVS, as MVS/ESA SP Version 4 Release 3 OpenEdition in 1994, and continued as an integral part of MVS/ESA SP Version 5, OS/390 and z/OS, with the name eventually changing from OpenEdition to Unix System Services. IBM also provided OpenEdition in VM/ESA Version 2 through z/VM.
IA-64 systems
As part of Project Monterey, IBM released a beta test version of AIX 5L for the IA-64 (Itanium) architecture in 2001, but this never became an official product due to lack of interest.
Apple Network Servers
The Apple Network Server (ANS) systems were PowerPC-based systems designed by Apple Computer to have numerous high-end features that standard Apple hardware did not have, including swappable hard drives, redundant power supplies, and external monitoring capability. These systems were more or less based on the Power Macintosh hardware available at the time but were designed to use AIX (versions 4.1.4 or 4.1.5) as their native operating system in a specialized version specific to the ANS called AIX for Apple Network Servers.
AIX was only compatible with the Network Servers and was not ported to standard Power Macintosh hardware. It should not be confused with A/UX, Apple's earlier version of Unix for 68k-based Macintoshes.
POWER ISA/PowerPC/Power ISA-based systems
The release of AIX version 3 (sometimes called AIX/6000) coincided with the announcement of the first POWER1-based IBM RS/6000 models in 1990.
AIX v3 innovated in several ways on the software side. It was the first operating system to introduce the idea of a journaling file system, JFS, which allowed for fast boot times by avoiding the need to ensure the consistency of the file systems on disks (see fsck) on every reboot. Another innovation was shared libraries which avoid the need for static linking from an application to the libraries it used. The resulting smaller binaries used less of the hardware RAM to run, and used less disk space to install. Besides improving performance, it was a boon to developers: executable binaries could be in the tens of kilobytes instead of a megabyte for an executable statically linked to the C library. AIX v3 also scrapped the microkernel of AIX v2, a contentious move that resulted in v3 containing no PL/8 code and being somewhat more "pure" than v2.
Other notable subsystems included:
IRIS GL, a 3D rendering library, the progenitor of OpenGL. IRIS GL was licensed by IBM from SGI in 1987, then still a fairly small company, which had sold only a few thousand machines at the time. SGI also provided the low-end graphics card for the RS/6000, capable of drawing 20,000 Gouraud-shaded triangles per second. The high-end graphics card was designed by IBM, a follow-on to the mainframe-attached IBM 5080, capable of rendering 990,000 vectors per second.
PHIGS, another 3D rendering API, popular in automotive CAD/CAM circles, and at the core of CATIA.
Full implementation of version 11 of the X Window System, together with Motif as the recommended widget collection and window manager.
Network file systems: NFS from Sun; AFS, the Andrew File System; and DFS, the Distributed File System.
NCS, the Network Computing System, licensed from Apollo Computer (later acquired by HP).
DPS on-screen display system. This was notable as a "plan B" in case the X11+Motif combination failed in the marketplace. However, it was highly proprietary, supported only by Sun, NeXT, and IBM. This cemented its failure in the marketplace in the face of the open systems challenge of X11+Motif and its lack of 3D capability.
AIX runs on IBM Power, System p, System i, System p5, System i5, eServer p5, eServer pSeries and eServer i5 server product lines, as well as IBM BladeCenter blades and IBM PureFlex compute nodes. In addition, AIX applications can run in the PASE subsystem under IBM i.
POWER7 AIX features
AIX 7.1 fully exploits systems based on POWER7 processors, including the Active Memory Expansion (AME) feature, which increases system flexibility by letting system administrators configure logical partitions (LPARs) to use less physical memory. For example, an LPAR running AIX appears to the OS applications to be configured with 80 GB of physical memory, but the hardware actually only consumes 60 GB of physical memory. Active Memory Expansion is a virtual memory compression system which employs memory compression technology to transparently compress in-memory data, allowing more data to be placed into memory and thus expanding the memory capacity of POWER7 systems. Using Active Memory Expansion can improve system use and increase a system's throughput. AIX 7 automatically manages the size of memory pages used, employing 4 KB pages, 64 KB pages, or a combination of the two. This self-tuning feature results in optimized performance without administrative effort.
POWER8 AIX features
AIX 7.2 exploits POWER8 hardware features including accelerators and eight-way hardware multithreading.
POWER9 AIX features
AIX 7.2 exploits POWER9 secure boot technology.
Versions
Version history
POWER/PowerPC releases
AIX V7.3, December 10, 2021
AIX V7.2, October 5, 2015
Live update for Interim Fixes, Service Packs and Technology Levels replaces the entire AIX kernel without impacting applications
Flash based filesystem caching
Cluster Aware AIX automation with repository replacement mechanism
SRIOV-backed VNIC, or dedicated VNIC virtualized network adapter support
RDSv3 over RoCE adds support of the Oracle RDSv3 protocol over the Mellanox Connect RoCE adapters
Requires POWER7 or newer CPUs
AIX V7.1, September 10, 2010
Support for 256 cores / 1024 threads in a single LPAR
The ability to run AIX V5.2 or V5.3 inside of a Workload Partition
An XML profile based system configuration management utility
Support for export of Fibre Channel adapters to WPARs
VIOS disk support in a WPAR
Cluster Aware AIX
AIX Event infrastructure
Role-based access control (RBAC) with domain support for multi-tenant environments
AIX V6.1, November 9, 2007
Workload Partitions (WPARs) operating system-level virtualization
Live Application Mobility
Live Partition Mobility
Security
Role Based Access Control RBAC
AIX Security Expert a system and network security hardening tool
Encrypting JFS2 filesystem
Trusted AIX
Trusted Execution
Integrated Electronic Service Agent for auto error reporting
Concurrent Kernel Maintenance
Kernel exploitation of POWER6 storage keys
ProbeVue dynamic tracing
Systems Director Console for AIX
Integrated filesystem snapshot
Requires POWER4 or newer CPUs
AIX 6 withdrawn from Marketing effective April 2016 and from Support effective April 2017
AIX 5L 5.3, August 13, 2004, end of support April 30, 2012
NFS Version 4
Advanced Accounting
Virtual SCSI
Virtual Ethernet
Exploitation of Simultaneous multithreading (SMT)
Micro-Partitioning enablement
POWER5 exploitation
JFS2 quotas
Ability to shrink a JFS2 filesystem
Kernel scheduler has been enhanced to dynamically increase and decrease the use of virtual processors.
AIX 5L 5.2, October 18, 2002, end of support April 30, 2009
Ability to run on the IBM BladeCenter JS20 with the PowerPC 970
Minimum level required for POWER5 hardware
MPIO for Fibre Channel disks
iSCSI Initiator software
Participation in Dynamic LPAR
Concurrent I/O (CIO) feature introduced for JFS2 released in Maintenance Level 01 in May 2003
AIX 5L 5.1, May 4, 2001, end of support April 1, 2006
Ability to run on an IA-64 architecture processor, although this never went beyond beta.
Minimum level required for POWER4 hardware and the last release that worked on the Micro Channel architecture
64-bit kernel, installed but not activated by default
JFS2
Ability to run in a Logical Partition on POWER4
The L stands for Linux affinity
Trusted Computing Base (TCB)
Support for mirroring with striping
AIX 4.3.3, September 17, 1999
Online backup function
Workload Manager (WLM)
Introduction of topas utility
AIX 4.3.2, October 23, 1998
AIX 4.3.1, April 24, 1998
First TCSEC security evaluation, completed December 18, 1998
AIX 4.3, October 31, 1997
Ability to run on 64-bit architecture CPUs
IPv6
Web-based System Manager
AIX 4.2.1, April 25, 1997
NFS Version 3
Y2K-compliant
AIX 4.2, May 17, 1996
AIX 4.1.5, November 8, 1996
AIX 4.1.4, October 20, 1995
AIX 4.1.3, July 7, 1995
CDE 1.0 became the default GUI environment, replacing the AIXwindows Desktop.
AIX 4.1.1, October 28, 1994
AIX 4.1, August 12, 1994
AIX Ultimedia Services introduced (multimedia drivers and applications)
AIX 4.0, 1994
Run on RS/6000 systems with PowerPC processors and PCI busses.
AIX 3.2 1992
AIX 3.1, (General Availability) February 1990
Journaled File System (JFS) filesystem type
AIXwindows Desktop (based on X.desktop from IXI Limited)
AIX 3.0 1989 (Early Access)
LVM (Logical Volume Manager) was incorporated into OSF/1, and in 1995 for HP-UX, and the Linux LVM implementation is similar to the HP-UX LVM implementation.
SMIT was introduced.
IBM System/370 releases
AIX/370 Version 1 Release 1
Announced March 15, 1988
Available February 16, 1989
AIX/370 Version 1 Release 2.1
Announced February 5, 1991
Available February 22, 1991
Withdrawn December 31, 1992
AIX/ESA Version 2 Release 1
Announced March 31, 1992
Available June 26, 1992
Withdrawn June 19, 1993
AIX/ESA Version 2 Release 2
Announced December 15, 1992
Available February 26, 1993
Withdrawn June 19, 1993
IBM PS/2 releases
AIX PS/2 v1.3, October 1992
Withdrawn from sale in US, March 1995
Patches supporting IBM ThinkPad 750C family of notebook computers, 1994
Patches supporting non PS/2 hardware and systems, 1993
AIX PS/2 v1.2.1, May 1991
AIX PS/2 v1.2, March 1990
AIX PS/2 v1.1, March 1989
AIX PS/2 (1–16 User Option) – $795
AIX PS/2 (1–2 User Option) – $595
AIX PS/2 Extensions – $275
AIX PS/2 DOS Merge – $275
AIX PS/2 Usability Services – $275
AIX PS/2 Text Formatting System – $220
AIX PS/2 X-Windows – $214
AIX PS/2 VS FORTRAN – $302
AIX PS/2 VS Pascal – $302
AIX PS/2 C Language – $302
AIX PS/2 Application Development Toolkit – $192
AIX PS/2 Workstation Host Interface Program – $441
AIX PS/2 Transmission Control Protocol/Internet Protocol (TCP/IP) – $330
AIX PS/2 INmail (1)/INed (2)/INnet (1)/FTP – $275
AIX Access for DOS Users – $164
X-Windows for IBM DOS – $214
IBM RT releases
AIX RT v2.2.1, March 1991
AIX RT v2.2, March 1990
AIX RT v2.1, March 1989
X-Windows included on installation media
AIX RT v1.1, 1986
User interfaces
The default shell was Bourne shell up to AIX version 3, but was changed to KornShell (ksh88) in version 4 for XPG4 and POSIX compliance.
Graphical
The Common Desktop Environment (CDE) is AIX's default graphical user interface. As part of Linux Affinity and the free AIX Toolbox for Linux Applications (ATLA), open-source KDE Plasma Workspaces and GNOME desktop are also available.
System Management Interface Tool
SMIT is the System Management Interface Tool for AIX. It allows a user to navigate a menu hierarchy of commands, rather than using the command line. Invocation is typically achieved with the command smit. Experienced system administrators make use of the F6 function key, which generates the command line that SMIT will invoke to complete the proposed task.
SMIT also generates a log of commands that are performed in the smit.script file. The smit.script file automatically records the commands with the command flags and parameters used. The smit.script file can be used as an executable shell script to rerun system configuration tasks. SMIT also creates the smit.log file, which contains additional detailed information that can be used by programmers in extending the SMIT system.
smit and smitty refer to the same program, though smitty invokes the text-based version, while smit will invoke an X Window System based interface if possible; however, if smit determines that X Window System capabilities are not present, it will present the text-based version instead of failing. Determination of X Window System capabilities is typically performed by checking for the existence of the DISPLAY variable.
Database
Object Data Manager (ODM) is a database of system information integrated into AIX, analogous to the registry in Microsoft Windows. A good understanding of the ODM is essential for managing AIX systems.
Data managed in ODM is stored and maintained as objects with associated attributes. Interaction with ODM is possible via an application programming interface (API) library for programs, and via command-line utilities such as odmshow, odmget, odmadd, odmchange and odmdelete for shell scripts and users (a small usage sketch follows the list below). SMIT and its associated AIX commands can also be used to query and modify information in the ODM.
Examples of information stored in the ODM database are:
Network configuration
Logical volume management configuration
Installed software information
Information for logical devices or software drivers
List of all AIX supported devices
Physical hardware devices installed and their configuration
Menus, screens and commands that SMIT uses
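The following Python sketch is purely illustrative and assumes it is running on an AIX host; it shells out to the odmget utility mentioned above (relying on its standard -q query flag and stanza-style output as described in IBM's documentation) and parses the result into dictionaries. The object class CuDv and the device name hdisk0 are placeholder choices.

```python
import subprocess

def odm_query(object_class: str, criteria: str) -> list[dict]:
    """Run odmget against one ODM object class and parse its stanza output.

    Illustrative sketch only: assumes odmget prints stanzas of the form
        CuDv:
                name = "hdisk0"
                status = 1
    """
    out = subprocess.run(
        ["odmget", "-q", criteria, object_class],
        capture_output=True, text=True, check=True,
    ).stdout

    objects, current = [], None
    for line in out.splitlines():
        line = line.strip()
        if line.endswith(":"):            # start of a new stanza
            current = {}
            objects.append(current)
        elif "=" in line and current is not None:
            key, _, value = line.partition("=")
            current[key.strip()] = value.strip().strip('"')
    return objects

# Example (placeholder device name): customized device entry for hdisk0.
for obj in odm_query("CuDv", "name=hdisk0"):
    print(obj.get("name"), obj.get("status"), obj.get("location"))
```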
See also
AOS, IBM's educational-market port of 4.3BSD
IBM PowerHA SystemMirror (formerly HACMP)
List of Unix systems
nmon
Operating systems timeline
Service Update Management Assistant
Vital Product Data (VPD)
References
External links
IBM AIX
IBM operating systems
Power ISA operating systems
PowerPC operating systems
IBM Aix
Object-oriented database management systems
1986 software |
2807 | https://en.wikipedia.org/wiki/Active%20Directory | Active Directory | Active Directory (AD) is a directory service developed by Microsoft for Windows domain networks. It is included in most Windows Server operating systems as a set of processes and services. Initially, Active Directory was used only for centralized domain management. However, Active Directory eventually became an umbrella title for a broad range of directory-based identity-related services.
A server running the Active Directory Domain Service (AD DS) role is called a domain controller. It authenticates and authorizes all users and computers in a Windows domain type network, assigning and enforcing security policies for all computers, and installing or updating software. For example, when a user logs into a computer that is part of a Windows domain, Active Directory checks the submitted username and password and determines whether the user is a system administrator or normal user. Also, it allows management and storage of information, provides authentication and authorization mechanisms, and establishes a framework to deploy other related services: Certificate Services, Active Directory Federation Services, Lightweight Directory Services, and Rights Management Services.
Active Directory uses Lightweight Directory Access Protocol (LDAP) versions 2 and 3, Microsoft's version of Kerberos, and DNS.
History
Like many information-technology efforts, Active Directory originated out of a democratization of design using Request for Comments (RFCs). The Internet Engineering Task Force (IETF), which oversees the RFC process, has accepted numerous RFCs initiated by widespread participants. For example, LDAP underpins Active Directory. X.500 directories and the Organizational Unit concept also preceded Active Directory, which makes use of those methods. The LDAP concept began to emerge even before the founding of Microsoft in April 1975, with RFCs as early as 1971. RFCs contributing to LDAP include RFC 1823 (on the LDAP API, August 1995), RFC 2307, RFC 3062, and RFC 4533.
Microsoft previewed Active Directory in 1999, released it first with Windows 2000 Server edition, and revised it to extend functionality and improve administration in Windows Server 2003. Active Directory support was also added to Windows 95, Windows 98 and Windows NT 4.0 via patch, with some features being unsupported. Additional improvements came with subsequent versions of Windows Server. In Windows Server 2008, additional services were added to Active Directory, such as Active Directory Federation Services. The part of the directory in charge of management of domains, which was previously a core part of the operating system, was renamed Active Directory Domain Services (ADDS) and became a server role like others. "Active Directory" became the umbrella title of a broader range of directory-based services. According to Byron Hynes, everything related to identity was brought under Active Directory's banner.
Active Directory Services
Active Directory Services consist of multiple directory services. The best known is Active Directory Domain Services, commonly abbreviated as AD DS or simply AD.
Domain Services
Active Directory Domain Services (AD DS) is the foundation stone of every Windows domain network. It stores information about members of the domain, including devices and users, verifies their credentials and defines their access rights. The server running this service is called a domain controller. A domain controller is contacted when a user logs into a device, accesses another device across the network, or runs a line-of-business Metro-style app sideloaded into a device.
Other Active Directory services (excluding LDS, as described below) as well as most of Microsoft server technologies rely on or use Domain Services; examples include Group Policy, Encrypting File System, BitLocker, Domain Name Services, Remote Desktop Services, Exchange Server and SharePoint Server.
The self-managed AD DS must not be confused with managed Azure AD DS, which is a cloud product.
Lightweight Directory Services
Active Directory Lightweight Directory Services (AD LDS), formerly known as Active Directory Application Mode (ADAM), is an implementation of LDAP protocol for AD DS. AD LDS runs as a service on Windows Server. AD LDS shares the code base with AD DS and provides the same functionality, including an identical API, but does not require the creation of domains or domain controllers. It provides a Data Store for storage of directory data and a Directory Service with an LDAP Directory Service Interface. Unlike AD DS, however, multiple AD LDS instances can run on the same server.
Certificate Services
Active Directory Certificate Services (AD CS) establishes an on-premises public key infrastructure. It can create, validate and revoke public key certificates for internal uses of an organization. These certificates can be used to encrypt files (when used with Encrypting File System), emails (per S/MIME standard), and network traffic (when used by virtual private networks, Transport Layer Security protocol or IPSec protocol).
AD CS predates Windows Server 2008, but its name was simply Certificate Services.
AD CS requires an AD DS infrastructure.
Federation Services
Active Directory Federation Services (AD FS) is a single sign-on service. With an AD FS infrastructure in place, users may use several web-based services (e.g. internet forum, blog, online shopping, webmail) or network resources using only one set of credentials stored at a central location, as opposed to having to be granted a dedicated set of credentials for each service. AD FS uses many popular open standards to pass token credentials such as SAML, OAuth or OpenID Connect. AD FS supports encryption and signing of SAML assertions. AD FS's purpose is an extension of that of AD DS: The latter enables users to authenticate with and use the devices that are part of the same network, using one set of credentials. The former enables them to use the same set of credentials in a different network.
As the name suggests, AD FS works based on the concept of federated identity.
AD FS requires an AD DS infrastructure, although its federation partner may not.
Rights Management Services
Active Directory Rights Management Services (AD RMS, known as Rights Management Services or RMS before Windows Server 2008) is a server software for information rights management shipped with Windows Server. It uses encryption and a form of selective functionality denial for limiting access to documents such as corporate e-mails, Microsoft Word documents, and web pages, and the operations authorized users can perform on them. These operations can include viewing, editing, copying, saving as or printing for example. IT administrators can create pre-set templates for the convenience of the end user if required. However, end users can still define who can access the content in question and set what they can do.
Logical structure
As a directory service, an Active Directory instance consists of a database and corresponding executable code responsible for servicing requests and maintaining the database. The executable part, known as Directory System Agent, is a collection of Windows services and processes that run on Windows 2000 and later. Objects in Active Directory databases can be accessed via LDAP, ADSI (a component object model interface), messaging API and Security Accounts Manager services.
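As an illustration of LDAP access, the following is a minimal sketch using the third-party Python ldap3 package; the domain controller name, credentials, base DN, and account name are hypothetical placeholders rather than values from this article.

# Minimal sketch: query Active Directory over LDAP with the third-party ldap3 package.
# Server name, credentials, base DN, and sAMAccountName are hypothetical placeholders.
from ldap3 import Server, Connection, SUBTREE

server = Server('dc1.example.com')                      # hypothetical domain controller
conn = Connection(server, user='reader@example.com', password='secret', auto_bind=True)

# Find a user object by its sAMAccountName attribute.
conn.search(search_base='DC=example,DC=com',
            search_filter='(&(objectClass=user)(sAMAccountName=jsmith))',
            search_scope=SUBTREE,
            attributes=['cn', 'distinguishedName', 'memberOf'])
for entry in conn.entries:
    print(entry.distinguishedName, entry.cn)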
Objects
Active Directory structures are arrangements of information about objects. The objects fall into two broad categories: resources (e.g., printers) and security principals (user or computer accounts and groups). Security principals are assigned unique security identifiers (SIDs).
Each object represents a single entity—whether a user, a computer, a printer, or a group—and its attributes. Certain objects can contain other objects. An object is uniquely identified by its name and has a set of attributes—the characteristics and information that the object represents— defined by a schema, which also determines the kinds of objects that can be stored in Active Directory.
The schema object lets administrators extend or modify the schema when necessary. However, because each schema object is integral to the definition of Active Directory objects, deactivating or changing these objects can fundamentally change or disrupt a deployment. Schema changes automatically propagate throughout the system. Once created, an object can only be deactivated—not deleted. Changing the schema usually requires planning.
Forests, trees, and domains
The Active Directory framework that holds the objects can be viewed at a number of levels. The forest, tree, and domain are the logical divisions in an Active Directory network.
Within a deployment, objects are grouped into domains. The objects for a single domain are stored in a single database (which can be replicated). Domains are identified by their DNS name structure, the namespace.
A domain is defined as a logical group of network objects (computers, users, devices) that share the same Active Directory database.
A tree is a collection of one or more domains and domain trees in a contiguous namespace, and is linked in a transitive trust hierarchy.
At the top of the structure is the forest. A forest is a collection of trees that share a common global catalog, directory schema, logical structure, and directory configuration. The forest represents the security boundary within which users, computers, groups, and other objects are accessible.
Organizational units
The objects held within a domain can be grouped into organizational units (OUs). OUs can provide hierarchy to a domain, ease its administration, and can resemble the organization's structure in managerial or geographical terms. OUs can contain other OUs—domains are containers in this sense. Microsoft recommends using OUs rather than domains for structure and to simplify the implementation of policies and administration. The OU is the recommended level at which to apply group policies, which are Active Directory objects formally named group policy objects (GPOs), although policies can also be applied to domains or sites (see below). The OU is the level at which administrative powers are commonly delegated, but delegation can be performed on individual objects or attributes as well.
Organizational units do not each have a separate namespace. As a consequence, for compatibility with legacy NetBIOS implementations, user accounts with an identical sAMAccountName are not allowed within the same domain even if the account objects are in separate OUs. This is because sAMAccountName, a user object attribute, must be unique within the domain. However, two users in different OUs can have the same common name (CN), the name under which they are stored in the directory itself, such as "fred.staff-ou.domain" and "fred.student-ou.domain", where "staff-ou" and "student-ou" are the OUs.
In general, duplicate names cannot be allowed even through hierarchical directory placement because Microsoft primarily relies on the principles of NetBIOS, a flat-namespace method of network object management that, for Microsoft software, goes all the way back to Windows NT 3.1 and MS-DOS LAN Manager. Allowing duplicate object names in the directory, or completely removing the use of NetBIOS names, would break backward compatibility with legacy software and equipment. However, disallowing duplicate object names in this way is a violation of the LDAP RFCs on which Active Directory is supposedly based.
As the number of users in a domain increases, conventions such as "first initial, middle initial, last name" (Western order) or the reverse (Eastern order) fail for common family names like Li (李), Smith or Garcia. Workarounds include adding a digit to the end of the username. Alternatives include creating a separate ID system of unique employee/student ID numbers to use as account names in place of actual users' names, and allowing users to nominate their preferred word sequence within an acceptable use policy.
Because duplicate usernames cannot exist within a domain, account name generation poses a significant challenge for large organizations that cannot be easily subdivided into separate domains, such as students in a public school system or university who must be able to use any computer across the network.
Shadow groups
In Microsoft's Active Directory, OUs do not confer access permissions, and objects placed within OUs are not automatically assigned access privileges based on their containing OU. This is a design limitation specific to Active Directory. Other competing directories such as Novell NDS are able to assign access privileges through object placement within an OU.
Active Directory requires a separate step for an administrator to assign an object in an OU as a member of a group also within that OU. Relying on OU location alone to determine access permissions is unreliable, because the object may not have been assigned to the group object for that OU.
A common workaround for an Active Directory administrator is to write a custom PowerShell or Visual Basic script to automatically create and maintain a user group for each OU in their directory. The scripts are run periodically to update the group to match the OU's account membership, but are unable to instantly update the security groups anytime the directory changes, as occurs in competing directories where security is directly implemented into the directory itself. Such groups are known as shadow groups. Once created, these shadow groups are selectable in place of the OU in the administrative tools.
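The sketch below shows the core of such a script in Python with the third-party ldap3 package instead of PowerShell or Visual Basic; the connection object, OU and group distinguished names are hypothetical placeholders, and a production script would add error handling, paging, and scheduling.

# Illustrative sketch of a "shadow group" sync: make a group's membership mirror
# the user objects currently located in one OU. All DNs are hypothetical, and
# conn is assumed to be an already-bound ldap3 Connection.
from ldap3 import SUBTREE, MODIFY_ADD, MODIFY_DELETE

OU_DN = 'OU=Students,DC=example,DC=com'                       # OU to shadow
GROUP_DN = 'CN=Students-Shadow,OU=Groups,DC=example,DC=com'   # shadow group

def sync_shadow_group(conn):
    # Users currently located in the OU.
    conn.search(OU_DN, '(objectClass=user)', SUBTREE, attributes=['distinguishedName'])
    in_ou = {str(e.distinguishedName) for e in conn.entries}

    # Users currently listed as members of the shadow group.
    conn.search(GROUP_DN, '(objectClass=group)', SUBTREE, attributes=['member'])
    entry = conn.entries[0]
    members = entry.member.values if 'member' in entry.entry_attributes else []
    in_group = {str(dn) for dn in members}

    # Apply only the differences.
    to_add, to_remove = in_ou - in_group, in_group - in_ou
    if to_add:
        conn.modify(GROUP_DN, {'member': [(MODIFY_ADD, sorted(to_add))]})
    if to_remove:
        conn.modify(GROUP_DN, {'member': [(MODIFY_DELETE, sorted(to_remove))]})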
Microsoft refers to shadow groups in the Server 2008 Reference documentation, but does not explain how to create them. There are no built-in server methods or console snap-ins for managing shadow groups.
The division of an organization's information infrastructure into a hierarchy of one or more domains and top-level OUs is a key decision. Common models are by business unit, by geographical location, by IT Service, or by object type and hybrids of these. OUs should be structured primarily to facilitate administrative delegation, and secondarily, to facilitate group policy application. Although OUs form an administrative boundary, the only true security boundary is the forest itself and an administrator of any domain in the forest must be trusted across all domains in the forest.
Partitions
The Active Directory database is organized in partitions, each holding specific object types and following a specific replication pattern. Microsoft often refers to these partitions as 'naming contexts'. The 'Schema' partition contains the definition of object classes and attributes within the Forest. The 'Configuration' partition contains information on the physical structure and configuration of the forest (such as the site topology). Both replicate to all domains in the Forest. The 'Domain' partition holds all objects created in that domain and replicates only within its domain.
Physical structure
Sites are physical (rather than logical) groupings defined by one or more IP subnets. AD also holds the definitions of connections, distinguishing low-speed (e.g., WAN, VPN) from high-speed (e.g., LAN) links. Site definitions are independent of the domain and OU structure and are common across the forest. Sites are used to control network traffic generated by replication and also to refer clients to the nearest domain controllers (DCs). Microsoft Exchange Server 2007 uses the site topology for mail routing. Policies can also be defined at the site level.
Physically, the Active Directory information is held on one or more peer domain controllers, replacing the NT PDC/BDC model. Each DC has a copy of the Active Directory. Servers joined to Active Directory that are not domain controllers are called Member Servers. A subset of objects in the domain partition replicate to domain controllers that are configured as global catalogs. Global catalog (GC) servers provide a global listing of all objects in the Forest.
Global Catalog servers replicate to themselves all objects from all domains and, hence, provide a global listing of objects in the forest. However, to minimize replication traffic and keep the GC's database small, only selected attributes of each object are replicated. This is called the partial attribute set (PAS). The PAS can be modified by modifying the schema and marking attributes for replication to the GC. Earlier versions of Windows used NetBIOS to communicate; Active Directory is fully integrated with DNS and requires TCP/IP. To be fully functional, the DNS server must support SRV resource records, also known as service records.
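As a small illustration of those service records, a client can locate domain controllers by querying DNS for SRV records. The sketch below assumes the third-party dnspython package and a made-up domain name; the _msdcs record name follows the convention Active Directory uses when registering its services in DNS.

# Minimal sketch: find LDAP servers (domain controllers) advertised via DNS SRV records.
# Requires the third-party dnspython package; the domain name is made up.
import dns.resolver

answers = dns.resolver.resolve('_ldap._tcp.dc._msdcs.example.com', 'SRV')
for record in sorted(answers, key=lambda r: (r.priority, -r.weight)):
    print(f'{record.target}:{record.port} (priority {record.priority}, weight {record.weight})')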
Replication
Active Directory synchronizes changes using multi-master replication. Replication by default is 'pull' rather than 'push', meaning that replicas pull changes from the server where the change was effected. The Knowledge Consistency Checker (KCC) creates a replication topology of site links using the defined sites to manage traffic. Intra-site replication is frequent and automatic as a result of change notification, which triggers peers to begin a pull replication cycle. Inter-site replication intervals are typically less frequent and do not use change notification by default, although this is configurable and can be made identical to intra-site replication.
Each link can have a 'cost' (e.g., DS3, T1, ISDN, etc.), and the KCC alters the site link topology accordingly. Replication may occur transitively through several site links on same-protocol site link bridges if the cost is low, although the KCC automatically costs a direct site-to-site link lower than transitive connections. Site-to-site replication can be configured to occur between a bridgehead server in each site, which then replicates the changes to other DCs within the site. Replication for Active Directory zones is automatically configured when DNS is activated in the domain, based on site.
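The effect of link costs can be illustrated with a toy shortest-path calculation over made-up sites and costs; this is only a Python sketch of cost-based route selection, not the actual KCC algorithm.

# Toy illustration of choosing the cheapest inter-site route from site-link costs.
# Sites and costs are invented; this is not how the KCC is implemented.
import heapq

site_links = {                    # undirected links with administrative costs
    ('NYC', 'NJ'): 100,
    ('NJ', 'CHI'): 300,
    ('NYC', 'CHI'): 200,          # direct link, costed lower than the transitive route
}

def cheapest_route(links, start, goal):
    graph = {}
    for (a, b), cost in links.items():
        graph.setdefault(a, []).append((cost, b))
        graph.setdefault(b, []).append((cost, a))
    queue, seen = [(0, start, [start])], set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for edge_cost, neighbor in graph.get(node, []):
            if neighbor not in seen:
                heapq.heappush(queue, (cost + edge_cost, neighbor, path + [neighbor]))
    return None

print(cheapest_route(site_links, 'NYC', 'CHI'))   # (200, ['NYC', 'CHI'])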
Replication of Active Directory uses Remote Procedure Calls (RPC) over IP (RPC/IP). Between sites, SMTP can be used for replication, but only for changes in the Schema, Configuration, or Partial Attribute Set (Global Catalog) partitions. SMTP cannot be used for replicating the default Domain partition.
Implementation
In general, a network utilizing Active Directory has more than one licensed Windows server computer. Backup and restore of Active Directory is possible for a network with a single domain controller, but Microsoft recommends more than one domain controller to provide automatic failover protection of the directory. Domain controllers are also ideally single-purpose for directory operations only, and should not run any other software or role.
Certain Microsoft products such as SQL Server and Exchange can interfere with the operation of a domain controller, necessitating isolation of these products on additional Windows servers. Combining them can make configuration or troubleshooting of either the domain controller or the other installed software more difficult. A business intending to implement Active Directory is therefore recommended to purchase a number of Windows server licenses, to provide for at least two separate domain controllers, and optionally, additional domain controllers for performance or redundancy, a separate file server, a separate Exchange server, a separate SQL Server, and so forth to support the various server roles.
Physical hardware costs for the many separate servers can be reduced through the use of virtualization, although for proper failover protection, Microsoft recommends not running multiple virtualized domain controllers on the same physical hardware.
Database
The Active Directory database, the directory store, in Windows 2000 Server uses the JET Blue-based Extensible Storage Engine (ESE98) and is limited to 16 terabytes and 2 billion objects (but only 1 billion security principals) in each domain controller's database. Microsoft has created NTDS databases with more than 2 billion objects. (NT4's Security Account Manager could support no more than 40,000 objects.) Called NTDS.DIT, the database has two main tables: the data table and the link table. Windows Server 2003 added a third main table for security descriptor single instancing.
Programs may access the features of Active Directory via the COM interfaces provided by Active Directory Service Interfaces.
Trusting
To allow users in one domain to access resources in another, Active Directory uses trusts.
Trusts inside a forest are automatically created when domains are created. The forest sets the default boundaries of trust, and implicit, transitive trust is automatic for all domains within a forest.
Terminology
One-way trust
One domain allows access to users on another domain, but the other domain does not allow access to users on the first domain.
Two-way trust
Two domains allow access to users on both domains.
Trusted domain
The domain that is trusted; whose users have access to the trusting domain.
Transitive trust
A trust that can extend beyond two domains to other trusted domains in the forest.
Intransitive trust
A one-way trust that does not extend beyond two domains.
Explicit trust
A trust that an admin creates. It is not transitive and is one way only.
Cross-link trust
An explicit trust between domains in different trees or in the same tree when a descendant/ancestor (child/parent) relationship does not exist between the two domains.
Shortcut
Joins two domains in different trees, transitive, one- or two-way.
Forest trust
Applies to the entire forest. Transitive, one- or two-way.
Realm
Can be transitive or nontransitive (intransitive), one- or two-way.
External
Connect to other forests or non-AD domains. Nontransitive, one- or two-way.
PAM trust
A one-way trust used by Microsoft Identity Manager from a (possibly low-level) production forest to a (Windows Server 2016 functionality level) 'bastion' forest, which issues time-limited group memberships.
Management solutions
Microsoft Active Directory management tools include:
Active Directory Administrative Center (Introduced with Windows Server 2012 and above),
Active Directory Users and Computers,
Active Directory Domains and Trusts,
Active Directory Sites and Services,
ADSI Edit,
Local Users and Groups,
Active Directory Schema snap-ins for Microsoft Management Console (MMC),
SysInternals ADExplorer
These management tools may not provide enough functionality for efficient workflows in large environments. Some third-party solutions extend the administration and management capabilities. They provide essential features for more convenient administration processes, such as automation, reports, and integration with other services.
Unix integration
Varying levels of interoperability with Active Directory can be achieved on most Unix-like operating systems (including Unix, Linux, Mac OS X or Java and Unix-based programs) through standards-compliant LDAP clients, but these systems usually do not interpret many attributes associated with Windows components, such as Group Policy and support for one-way trusts.
Third parties offer Active Directory integration for Unix-like platforms, including:
PowerBroker Identity Services, formerly Likewise (BeyondTrust, formerly Likewise Software) – Allows a non-Windows client to join Active Directory
ADmitMac (Thursby Software Systems)
Samba (free software under GPLv3) – Can act as a domain controller
The schema additions shipped with Windows Server 2003 R2 include attributes that map closely enough to RFC 2307 to be generally usable. The reference implementations of RFC 2307, nss_ldap and pam_ldap, provided by PADL.com, support these attributes directly. The default schema for group membership complies with RFC 2307bis (proposed). Windows Server 2003 R2 includes a Microsoft Management Console snap-in that creates and edits the attributes.
An alternative option is to use another directory service: non-Windows clients authenticate to it while Windows clients authenticate to AD. Such alternatives include 389 Directory Server (formerly Fedora Directory Server, FDS), ViewDS Identity Solutions - ViewDS v7.2 XML Enabled Directory, and Sun Microsystems' Sun Java System Directory Server. The latter two can both perform two-way synchronization with AD and thus provide a "deflected" integration.
Another option is to use OpenLDAP with its translucent overlay, which can extend entries in any remote LDAP server with additional attributes stored in a local database. Clients pointed at the local database see entries containing both the remote and local attributes, while the remote database remains completely untouched.
Administration (querying, modifying, and monitoring) of Active Directory can be achieved via many scripting languages, including PowerShell, VBScript, JScript/JavaScript, Perl, Python, and Ruby. Free and non-free AD administration tools can help to simplify and possibly automate AD management tasks.
Since October 2017, Amazon AWS has offered integration with Microsoft Active Directory.
See also
AGDLP (implementing role based access controls using nested groups)
Apple Open Directory
Flexible single master operation
FreeIPA
List of LDAP software
System Security Services Daemon (SSSD)
Univention Corporate Server
References
External links
Microsoft Technet: White paper: Active Directory Architecture (Single technical document that gives an overview about Active Directory.)
Microsoft Technet: Detailed description of Active Directory on Windows Server 2003
Microsoft MSDN Library: [MS-ADTS]: Active Directory Technical Specification (part of the Microsoft Open Specification Promise)
Active Directory Application Mode (ADAM)
Microsoft MSDN: [AD-LDS]: Active Directory Lightweight Directory Services
Microsoft TechNet: [AD-LDS]: Active Directory Lightweight Directory Services
Microsoft MSDN: Active Directory Schema
Microsoft TechNet: Understanding Schema
Microsoft TechNet Magazine: Extending the Active Directory Schema
Microsoft MSDN: Active Directory Certificate Services
Microsoft TechNet: Active Directory Certificate Services
Directory services
Microsoft server technology
Windows components
Windows 2000 |
2923 | https://en.wikipedia.org/wiki/AIM%20%28software%29 | AIM (software) | AIM (AOL Instant Messenger) was an instant messaging and presence computer program created by AOL, which used the proprietary OSCAR instant messaging protocol and the TOC protocol to allow registered users to communicate in real time.
AIM was popular by the late 1990s in the United States and other countries, and was the leading instant messaging application in that region into the following decade. Teens and college students were known to use the messenger's away message feature to keep in touch with friends, frequently changing their away message throughout the day or leaving a message up, with the computer left on, to inform buddies of their goings-on, location, parties, thoughts, or jokes. AIM's popularity declined as AOL's subscriber base shrank, falling steeply towards the 2010s as Gmail's Google Talk, SMS, and Internet social networks such as Facebook gained popularity. Its fall has often been compared with that of other once-popular Internet services, such as Myspace.
In June 2015, AOL was acquired by Verizon Communications. In June 2017, Verizon combined AOL and Yahoo into its subsidiary Oath Inc. (now called Yahoo). The company discontinued AIM as a service on December 15, 2017.
History
In May 1997, AIM was released unceremoniously as a stand-alone download for Microsoft Windows. AIM was an outgrowth of "online messages" in the original platform written in PL/1 on a Stratus computer by Dave Brown. At one time, the software had the largest share of the instant messaging market in North America, especially in the United States (with 52% of the total reported). This does not include other instant messaging software related to or developed by AOL, such as ICQ and iChat.
During its heyday, its main competitors were ICQ (although AOL acquired ICQ in 1998), Yahoo! Messenger and MSN Messenger. AOL particularly had a rivalry or "chat war" with PowWow and Microsoft, starting in 1999. There were several attempts from Microsoft to simultaneously log into their own and AIM's protocol servers. AOL was not happy about this and started blocking MSN Messenger from being able to access AIM. This led to efforts by many companies to challenge the AOL and Time Warner merger on the grounds of antitrust behaviour, leading to the formation of the OpenNet Coalition.
Official mobile versions of AIM appeared as early as 2001 on Palm OS through the AOL application. Third-party applications allowed it to be used in 2002 for the Sidekick. A version for Symbian OS was announced in 2003, as were others for BlackBerry and Windows Mobile.
After 2012, the stand-alone official AIM client software included advertisements and was available for Microsoft Windows, Windows Mobile, Classic Mac OS, macOS, Android, iOS, and BlackBerry OS.
Usage decline and product sunset
Around 2011, AIM started to lose popularity rapidly, partly due to the quick rise of Gmail and its built-in real-time Google Chat instant messenger integration in 2011, and because many people migrated to SMS or iMessage text messaging and, later, to social networking websites and apps for instant messaging, in particular Facebook Messenger, which was released as a standalone application the same year. AOL made a partnership to integrate AIM messaging into Google Talk, and had a feature for AIM users to send SMS messages directly from AIM to any number, as well as for SMS users to send an IM to any AIM user.
As of June 2011, one source reported AOL Instant Messenger market share had collapsed to 0.73%. However, this number only reflected installed IM applications, and not active users. The engineers responsible for AIM claimed that they were unable to convince AOL management that free was the future.
On March 3, 2012, AOL ended employment of AIM's development staff while leaving it active and with help support still provided. On October 6, 2017, it was announced that the AIM service would be discontinued on December 15; however, a non-profit development team known as Wildman Productions started up a server for older versions of AOL Instant Messenger, known as AIM Phoenix.
The "AIM Man"
The AIM mascot was designed by JoRoan Lazaro and was implemented in the first release in 1997. This was a yellow stickman-like figure, often called the "Running Man". The mascot appeared on all AIM logos and most wordmarks, and always appeared at the top of the buddy list. AIM's popularity in the late 1990s and the 2000s led to the "Running Man" becoming a familiar brand on the Internet. After over 14 years, the iconic logo disappeared as part of the AIM rebranding in 2011. However, in August 2013, the "Running Man" returned.
In 2014, a Complex editor called it a "symbol of America". In April 2015, the Running Man was officially featured in the Virgin London Marathon, dressed by a person for the AOL-partnered Free The Children charity.
Protocol
The standard protocol that AIM clients used to communicate is called Open System for CommunicAtion in Realtime (OSCAR). Most AOL-produced versions of AIM and popular third party AIM clients use this protocol. However, AOL also created a simpler protocol called TOC that lacks many of OSCAR's features, but was sometimes used for clients that only require basic chat functionality. The TOC/TOC2 protocol specifications were made available by AOL, while OSCAR is a closed protocol that third parties had to reverse-engineer.
In January 2008, AOL introduced experimental Extensible Messaging and Presence Protocol (XMPP) support for AIM, allowing AIM users to communicate using the standardized, open-source XMPP. However, in March 2008, this service was discontinued. In May 2011, AOL started offering limited XMPP support. On March 1, 2017, AOL announced (via XMPP-login-time messages) that the AOL XMPP gateway would be desupported, effective March 28, 2017.
Privacy
To comply with privacy regulations, AIM had strict age restrictions: accounts were available only to people over the age of 13, and children younger than that were not permitted access to AIM.
Under the AIM Privacy Policy, AOL had no rights to read or monitor any private communications between users. The user's profile, however, was not private.
In November 2002, AOL targeted the corporate industry with Enterprise AIM Services (EAS), a higher security version of AIM.
If public content was accessed, it could be used for online, print or broadcast advertising, etc. This was outlined in the policy and terms of service: "... you grant AOL, its parent, affiliates, subsidiaries, assigns, agents and licensees the irrevocable, perpetual, worldwide right to reproduce, display, perform, distribute, adapt and promote this Content in any medium". This allowed anything users posted to be used without a separate request for permission.
AIM's security was called into question. AOL stated that it had taken great pains to ensure that personal information would not be accessed by unauthorized members, but that it could not guarantee that this would never happen.
AIM was different from other clients, such as Yahoo! Messenger, in that it did not require approval from users to be added to other users' buddy lists. As a result, it was possible for users to keep other unsuspecting users on their buddy list to see when they were online, read their status and away messages, and read their profiles. There was also a Web API to display one's status and away message as a widget on one's webpage. Though one could block a user from communicating with them and seeing their status, this did not prevent that user from creating a new account that would not automatically be blocked and therefore able to track their status. A more conservative privacy option was to select a menu feature that only allowed communication with users on one's buddy list; however, this option also created the side-effect of blocking all users who were not on one's buddy list.
Chat robots
AOL and various other companies supplied robots (bots) on AIM that could receive messages and send a response based on the bot's purpose. For example, bots could help with studying, like StudyBuddy. Some were made to relate to children and teenagers, like Spleak.
Others gave advice. The more useful chat bots had features like the ability to play games, get sport scores, weather forecasts or financial stock information. Users were able to talk to automated chat bots that could respond to natural human language. They were primarily put into place as a marketing strategy and for unique advertising options. It was used by advertisers to market products or build better consumer relations.
Before the inclusion of such bots, the earlier bots DoorManBot and AIMOffline provided features offered by AOL for those who needed them. ZolaOnAOL and ZoeOnAOL were short-lived bots that ultimately retired their features in favor of SmarterChild.
URI scheme
AOL Instant Messenger's installation process automatically installed an extra URI scheme ("protocol") handler into some Web browsers, so URIs beginning "aim:" could open a new AIM window with specified parameters. This was similar in function to the mailto: URI scheme, which created a new e-mail message using the system's default mail program. For instance, a webpage might have included a link like the following in its HTML source to open a window for sending a message to the AIM user notarealuser:
<a href="aim:goim?screenname=notarealuser">Send Message</a>
To specify a message body, the message parameter was used, so the link location would have looked like this:
aim:goim?screenname=notarealuser&message=This+is+my+message
To specify an away message, the message parameter was used, so the link location would have looked like this:
aim:goaway?message=Hello,+my+name+is+Bill
When placing this inside a URL link, an AIM user could click on the URL link and the away message "Hello, my name is Bill" would instantly become their away message.
To add a buddy, the addbuddy message was used, with the "screenname" parameter
aim:addbuddy?screenname=notarealuser
This type of link was commonly found on forum profiles to easily add contacts.
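Such links could also be generated programmatically with correct parameter encoding (spaces become "+", as in the examples above). The snippet below is a minimal Python sketch using only the standard library; the screen name and message are placeholders.

# Minimal sketch: build aim: links with properly encoded query parameters.
from urllib.parse import urlencode

def aim_link(target, **params):
    return 'aim:' + target + '?' + urlencode(params)

print(aim_link('goim', screenname='notarealuser', message='This is my message'))
# aim:goim?screenname=notarealuser&message=This+is+my+message
print(aim_link('addbuddy', screenname='notarealuser'))
# aim:addbuddy?screenname=notarealuser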
Vulnerabilities
AIM had security weaknesses that have enabled exploits to be created that used third-party software to perform malicious acts on users' computers. Although most were relatively harmless, such as being kicked off the AIM service, others performed potentially dangerous actions, such as sending viruses. Some of these exploits relied on social engineering to spread by automatically sending instant messages that contained a Uniform Resource Locator (URL) accompanied by text suggesting the receiving user click on it, an action which leads to infection, i.e., a trojan horse. These messages could easily be mistaken as coming from a friend and contain a link to a Web address that installed software on the user's computer to restart the cycle.
Users also reported sudden additions of toolbars and advertisements from third parties in newer versions of AIM. Multiple complaints about the lack of control over third-party involvement caused many users to stop using the service.
Extra features
iPhone application
On March 6, 2008, during Apple Inc.'s iPhone SDK event, AOL announced that it would be releasing an AIM application for iPhone and iPod Touch users. The application was available for free from the App Store, but the company also provided a paid version, which displayed no advertisements; both were available from the App Store. The AIM client for iPhone and iPod Touch supported standard AIM accounts as well as MobileMe accounts. There was also an express version of AIM accessible through the Safari browser on the iPhone and iPod Touch.
In 2011, AOL launched an overhaul of their Instant Messaging service. Included in the update was a brand new iOS application for iPhone and iPod Touch that incorporated all the latest features. A brand new icon was used for the application, featuring the new cursive logo for AIM. The user-interface was entirely redone for the features including: a new buddy list, group messaging, in-line photos and videos, as well as improved file-sharing.
Version 5.0.5, updated in March 2012, supported more social stream features, much like Facebook and Twitter, as well as the ability to send voice messages up to 60 seconds long.
iPad application
On April 3, 2010, Apple released the first-generation iPad. Along with this newly released device, AOL released the AIM application for iPad. It was built entirely from scratch for the new version of iOS, with a specialized user interface for the device. It supported geolocation, Facebook status updates and chat, Myspace, Twitter, YouTube, Foursquare, and many other social networking platforms.
AIM Express
AIM Express ran in a pop-up browser window. It was intended for use by people who were unwilling or unable to install a standalone application, or who were at computers that lacked the AIM application. AIM Express supported many of the standard features included in the stand-alone client, but did not provide advanced features like file transfer, audio chat, video conferencing, or buddy info. It was implemented in Adobe Flash. It was an upgrade to the prior AOL Quick Buddy, which was later available for older systems that could not handle Express before being discontinued. Express and Quick Buddy were similar to MSN Web Messenger and Yahoo! Web Messenger. This web version evolved into AIM.com's web-based messenger.
AIM Pages
AIM Pages was a free website released in May 2006 by AOL as a replacement for AIMSpace. Anyone who had an AIM user name and was at least 16 years of age could create their own web page (to display an online, dynamic profile) and share it with buddies from their AIM Buddy list.
Layout
AIM Pages included links to email or instant message the owner, along with a section listing the owner's "buddies" by AIM user name. It was possible to create modules in a Module T microformat. Video hosting sites like Netflix and YouTube could be added to one's AIM Page, as well as other sites like Amazon.com. It was also possible to insert HTML code.
The main focus of AIM Pages was the integration of external modules, like those listed above, into the AOL Instant Messenger experience.
Discontinuation
By late 2007, AIM Pages had been discontinued. After the shutdown, links to AIM Pages were redirected to AOL Lifestream, AOL's new site aimed at collecting external modules in one place, independent of AIM buddies. AOL Lifestream was shut down on February 24, 2017.
AIM for Mac
AOL released an all-new AIM for the Macintosh on September 29, 2008 and the final build on December 15, 2008. The redesigned AIM for Mac is a full universal binary Cocoa API application that supports both Tiger and Leopard — Mac OS X 10.4.8 (and above) or Mac OS X 10.5.3 (and above). On October 1, 2009, AOL released AIM 2.0 for Mac.
AIM real-time IM
This feature was available in AIM 7 and allowed a user to see what the other person was typing as it was being typed. It was developed and built with assistance from the Trace Research and Development Center at the University of Wisconsin–Madison and Gallaudet University. The application provided visually impaired users the ability to convert messages from text (words) to speech. For the application to work, users needed AIM 6.8 or higher, as it was not compatible with older versions of AIM software, AIM for Mac, or iChat.
AIM to mobile (messaging to phone numbers)
This feature allowed text messaging to a phone number (text messaging is less functional than instant messaging).
Discontinued features
AIM Phoneline
AIM Phoneline was a Voice over IP PC-PC, PC-Phone and Phone-to-PC service provided via the AIM application. It was also known to work with Apple's iChat client. The service was officially closed to its customers on January 13, 2009. The closing of the free service caused the number associated with the service to be disabled and not transferable to a different service. The AIM Phoneline website recommended that users switch to a new service named AIM Call Out, which has also since been discontinued.
Launched on May 16, 2006, AIM Phoneline provided users the ability to have several local numbers, allowing AIM users to receive free incoming calls. The service allowed users to make calls to landlines and mobile devices through the use of a computer. The service, however, was only free for receiving and AOL charged users $14.95 a month for an unlimited calling plan. In order to use AIM Phoneline users had to install the latest free version of AIM Triton software and needed a good set of headphones with a boom microphone. It could take several days after a user signed up before it started working.
AIM Call Out
AIM Call Out is a discontinued Voice over IP PC-PC, PC-Phone and Phone-to-PC service provided by AOL via its AIM application that replaced the defunct AIM Phoneline service in November 2007. It did not depend on the AIM client and could be used with only an AIM screenname via the WebConnect feature or a dedicated SIP device. The AIM Call Out service was shut down on March 25, 2009.
Security
On November 4, 2014, AIM scored one out of seven points on the Electronic Frontier Foundation's secure messaging scorecard. AIM received a point for encryption during transit, but lost points because communications are not encrypted with a key to which the provider has no access (i.e., the communications are not end-to-end encrypted), users can't verify contacts' identities, past messages are not secure if the encryption keys are stolen (i.e., the service does not provide forward secrecy), the code is not open to independent review (i.e., the code is not open-source), the security design is not properly documented, and there has not been a recent independent security audit. BlackBerry Messenger, Ebuddy XMS, Hushmail, Kik Messenger, Skype, Viber, and Yahoo! Messenger also scored one out of seven points.
See also
Comparison of cross-platform instant messaging clients
List of defunct instant messaging platforms
References
External links
1997 software
Android (operating system) software
Instant Messenger
BlackBerry software
Classic Mac OS instant messaging clients
Computer-related introductions in 1997
Cross-platform software
Defunct instant messaging clients
Instant messaging clients
Internet properties disestablished in 2017
IOS software
MacOS instant messaging clients
Online chat
Symbian software
Unix instant messaging clients
Videotelephony
Windows instant messaging clients |
3712 | https://en.wikipedia.org/wiki/Bell%20Labs | Bell Labs | Nokia Bell Labs, originally named Bell Telephone Laboratories (1925–1984), then AT&T Bell Laboratories (1984–1996) and Bell Labs Innovations (1996–2007), is an American industrial research and scientific development company owned by Finnish company Nokia. With headquarters located in Murray Hill, New Jersey, the company operates several laboratories in the United States and around the world.
Researchers working at Bell Laboratories are credited with the development of radio astronomy, the transistor, the laser, the photovoltaic cell, the charge-coupled device (CCD), information theory, the Unix operating system, and the programming languages B, C, C++, S, SNOBOL, AWK, AMPL, and others. Nine Nobel Prizes have been awarded for work completed at Bell Laboratories.
Bell Labs had its origin in the complex corporate organization of the Bell Systems telephone conglomerate. In the late 19th century, the laboratory began as the Western Electric Engineering Department, located at 463 West Street in New York City. In 1925, after years of conducting research and development under Western Electric, a Bell subsidiary, the Engineering Department was reformed into Bell Telephone Laboratories and placed under the shared ownership of the American Telephone & Telegraph Company (AT&T) and Western Electric. In the 1960s the laboratory was moved to New Jersey. It was acquired by Nokia in 2016.
Origin and historical locations
Bell's personal research after the telephone
In 1880, when the French government awarded Alexander Graham Bell the Volta Prize of 50,000 francs (approximately US$10,000 at that time) for the invention of the telephone, he used the award to fund the Volta Laboratory (Alexander Graham Bell Laboratory) in Washington, D.C. in collaboration with Sumner Tainter and Bell's cousin Chichester Bell. The laboratory was variously known as the Volta Bureau, the Bell Carriage House, the Bell Laboratory and the Volta Laboratory.
It focused on the analysis, recording, and transmission of sound. Bell used his considerable profits from the laboratory for further research and education to permit the "[increased] diffusion of knowledge relating to the deaf": resulting in the founding of the Volta Bureau (c. 1887) which was located at Bell's father's house at 1527 35th Street N.W. in Washington, D.C. Its carriage house became their headquarters in 1889.
In 1893, Bell constructed a new building close by at 1537 35th Street N.W., specifically to house the lab. This building was declared a National Historic Landmark in 1972.
After the invention of the telephone, Bell maintained a relatively distant role with the Bell System as a whole, but continued to pursue his own personal research interests.
Early antecedent
The Bell Patent Association was formed by Alexander Graham Bell, Thomas Sanders, and Gardiner Hubbard when filing the first patents for the telephone in 1876.
Bell Telephone Company, the first telephone company, was formed a year later. It later became a part of the American Bell Telephone Company.
American Telephone & Telegraph Company (AT&T) and its own subsidiary company took control of American Bell and the Bell System by 1889.
American Bell held a controlling interest in Western Electric (which was the manufacturing arm of the business) whereas AT&T was doing research into the service providers.
In 1884, the American Bell Telephone Company created the Mechanical Department from the Electrical and Patent Department formed a year earlier.
Formal organization and location changes
In 1896, Western Electric bought property at 463 West Street to station their manufacturers and engineers who had been supplying AT&T with their product. This included everything from telephones, telephone exchange switches, and transmission equipment.
On January 1, 1925, Bell Telephone Laboratories, Inc. was organized to consolidate the development and research activities in the communication field and allied sciences for the Bell System. Ownership was evenly shared between Western Electric and AT&T. The new company had existing personnel of 3600 engineers, scientists, and support staff. In addition to the existing research facilities of 400,000 square feet of space, its space was extended with a new building on about one quarter of a city block.
The first chairman of the board of directors was John J. Carty, the vice-president of AT&T, and the first president was Frank B. Jewett, also a board member, who stayed there until 1940. The operations were directed by E. B. Craft, executive vice-president, and formerly chief engineer at Western Electric.
By the early 1940s, Bell Labs engineers and scientists had begun to move to other locations away from the congestion and environmental distractions of New York City, and in 1967 Bell Laboratories headquarters was officially relocated to Murray Hill, New Jersey.
Among the later Bell Laboratories locations in New Jersey were Holmdel, Crawford Hill, the Deal Test Site, Freehold, Lincroft, Long Branch, Middletown, Neptune, Princeton, Piscataway, Red Bank, Chester, and Whippany. Of these, Murray Hill and Crawford Hill remain in existence (the Piscataway and Red Bank locations were transferred to and are now operated by Telcordia Technologies and the Whippany site was purchased by Bayer).
The largest grouping of people in the company was in Illinois, at Naperville-Lisle, in the Chicago area, which had the largest concentration of employees (about 11,000) prior to 2001. There also were groups of employees in Indianapolis, Indiana; Columbus, Ohio; North Andover, Massachusetts; Allentown, Pennsylvania; Reading, Pennsylvania; and Breinigsville, Pennsylvania; Burlington, North Carolina (1950s–1970s, moved to Greensboro 1980s) and Westminster, Colorado. Since 2001, many of the former locations have been scaled down or closed.
The Holmdel site, a 1.9 million square foot structure set on 473 acres, was closed in 2007. The mirrored-glass building was designed by Eero Saarinen. In August 2013, Somerset Development bought the building, intending to redevelop it into a mixed commercial and residential project. A 2012 article expressed doubt on the success of the newly named Bell Works site, but several large tenants had announced plans to move in through 2016 and 2017.
Building Complex Location information, past and present
Crawford Hill, (HOH) Crawfords Corner Road, Holmdel, NJ (built 1930s; building sold and currently an exhibit site; home of the horn antenna used to support the "Big Bang" theory.)
Holmdel, (HO) 101 Crawfords Corner, Holmdel, NJ (built 1959–1962, with older structures from the 1920s; currently a private building called Bell Works. Discovered extraterrestrial radio emissions; undersea cable research; satellite transmission systems Telstar 3 and 4.)
Indian Hill, (IH) 2000 Naperville Road, Naperville, Il (built 1966-Currently Nokia-Developed switching technology and systems.)
Murray Hill, (MH) 600 Murray Hill, Murray Hill, NJ (built 1941–1945, currently as Nokia, Developed transistor, UNIX operating system and C programming language, Anechoic Chamber, several building sections demolished)
Short Hills, (HL) 101-103 JFK Parkway, Short Hills, NJ (Housed various departments such as Accounts Payable, IT Purchasing, HR Personnel, Payroll, Telecom and the Government group, and the Unix Administration Systems Computer Center. The buildings still exist, without the overhead walkway between them, and are now occupied by two different companies, one in banking and one in business analytics.)
Summit, (SF) 190 River Road, Summit, NJ (Building was part of UNIX Software Operations, which became UNIX System Laboratories, Inc. In December 1991, USL combined with Novell. The location is now occupied by a banking company.)
West St, ( ) 463 West Street, New York, NY (built 1898; served as Bell Labs headquarters from 1925 until 1966; experimental talking movies, wave nature of matter, radar)
Whippany, (WH) 67 Whippany Road, Whippany, NJ (built 1920s; partially demolished, with a portion of the building now occupied by Bayer. Performed military research and development, including radar, guidance for the Nike missile, underwater sound, Telstar 1, and wireless technologies.)
Bell Labs locations listed in 1974 corporate directory
Allentown-Allentown, PA
Atlanta-Norcross, GA
Centennial Park-Piscataway, NJ
Chester-Chester, NJ
Columbus-Columbus, OH
Crawford Hill-Holmdel, NJ
Denver-Denver, CO
Grand Forks-MSR-Cavalier, ND [Missile Site Radar (MSR) Site]
Grand Forks-PAR-Cavalier, ND [Perimeter Acquisition Radar (PAR) Site]
Guilford Center-Greensboro, NC
Holmdel-Holmdel, NJ
Indianapolis-Indianapolis, IN
Indian Hill-Naperville, IL
Kwajalein-San Francisco, CA
Madison-Madison, NJ
Merrimack Valley-North Andover, MA
Murray Hill-Murray Hill, NJ
Raritan River Center-Piscataway, NJ
Reading-Reading, PA
Union-Union, NJ
Warren Service Center-Warren, NJ
Whippany-Whippany, NJ
Discoveries and developments
Bell Laboratories was, and is, regarded by many as the premier research facility of its type, developing a wide range of revolutionary technologies, including radio astronomy, the transistor, the laser, information theory, the operating system Unix, the programming languages C and C++, solar cells, the charge-coupled device (CCD), and many other optical, wireless, and wired communications technologies and systems.
1920s
In 1926, the laboratories invented an early example of a synchronous-sound motion picture system, in competition with Fox Movietone and DeForest Phonofilm.
In 1924, Bell Labs physicist Walter A. Shewhart proposed the control chart as a method to determine when a process was in a state of statistical control. Shewhart's methods were the basis for statistical process control (SPC): the use of statistically based tools and techniques to manage and improve processes. This was the origin of the modern quality movement, including Six Sigma.
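The idea behind the control chart can be sketched in a few lines: estimate the process mean and spread from in-control baseline data, then flag samples that fall outside three standard deviations. The simplified Python example below uses invented measurements and individual values; real SPC practice usually derives limits from rational subgroups.

# Simplified sketch of Shewhart-style three-sigma control limits.
from statistics import mean, stdev

baseline = [10.1, 9.8, 10.0, 10.2, 9.9, 10.1, 10.0, 9.9]   # in-control history (made up)
center = mean(baseline)
sigma = stdev(baseline)
lower, upper = center - 3 * sigma, center + 3 * sigma

for x in [10.0, 10.3, 13.0]:                                # new samples to check
    state = 'in control' if lower <= x <= upper else 'out of control'
    print(f'{x}: {state} (limits {lower:.2f} to {upper:.2f})')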
In 1927, a Bell team headed by Herbert E. Ives successfully transmitted long-distance 128-line television images of Secretary of Commerce Herbert Hoover from Washington to New York. In 1928 the thermal noise in a resistor was first measured by John B. Johnson, and Harry Nyquist provided the theoretical analysis; this is now termed Johnson noise. During the 1920s, the one-time pad cipher was invented by Gilbert Vernam and Joseph Mauborgne at the laboratories. Bell Labs' Claude Shannon later proved that it is unbreakable.
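The one-time pad mentioned above can be sketched in a few lines of Python: the message is XORed with a random key of the same length, and XORing the ciphertext with the same key recovers the message. Its security rests on the key being truly random, as long as the message, and never reused; the message below is an arbitrary example.

# Minimal sketch of the one-time pad (Vernam cipher) using XOR.
import secrets

def otp_xor(data: bytes, key: bytes) -> bytes:
    assert len(key) >= len(data)                 # the pad must cover the whole message
    return bytes(b ^ k for b, k in zip(data, key))

message = b'ATTACK AT DAWN'
key = secrets.token_bytes(len(message))          # random, single-use key
ciphertext = otp_xor(message, key)
print(otp_xor(ciphertext, key))                  # b'ATTACK AT DAWN'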
1930s
In 1931, a foundation for radio astronomy was laid by Karl Jansky during his work investigating the origins of static on long-distance shortwave communications. He discovered that radio waves were being emitted from the center of the galaxy. In 1931 and 1932, experimental high fidelity, long playing, and even stereophonic recordings were made by the labs of the Philadelphia Orchestra, conducted by Leopold Stokowski. In 1933, stereo signals were transmitted live from Philadelphia to Washington, D.C. In 1937, the vocoder, an electronic speech compression device, or codec, and the Voder, the first electronic speech synthesizer, were developed and demonstrated by Homer Dudley, the Voder being demonstrated at the 1939 New York World's Fair. Bell researcher Clinton Davisson shared the Nobel Prize in Physics with George Paget Thomson for the discovery of electron diffraction, which helped lay the foundation for solid-state electronics.
1940s
In the early 1940s, the photovoltaic cell was developed by Russell Ohl. In 1943, Bell developed SIGSALY, the first digital scrambled speech transmission system, used by the Allies in World War II. The British wartime codebreaker Alan Turing visited the labs at this time, working on speech encryption and meeting Claude Shannon.
Bell Labs Quality Assurance Department gave the world and the United States such statisticians as Walter A. Shewhart, W. Edwards Deming, Harold F. Dodge, George D. Edwards, Harry Romig, R. L. Jones, Paul Olmstead, E.G.D. Paterson, and Mary N. Torrey. During World War II, Emergency Technical Committee – Quality Control, drawn mainly from Bell Labs' statisticians, was instrumental in advancing Army and Navy ammunition acceptance and material sampling procedures.
In 1947, the transistor, probably the most important invention developed by Bell Laboratories, was invented by John Bardeen, Walter Houser Brattain, and William Bradford Shockley (who subsequently shared the Nobel Prize in Physics in 1956). In 1947, Richard Hamming invented Hamming codes for error detection and correction. For patent reasons, the result was not published until 1950. In 1948, "A Mathematical Theory of Communication", one of the founding works in information theory, was published by Claude Shannon in the Bell System Technical Journal. It built in part on earlier work in the field by Bell researchers Harry Nyquist and Ralph Hartley, but it greatly extended these. Bell Labs also introduced a series of increasingly complex calculators through the decade. Shannon was also the founder of modern cryptography with his 1949 paper Communication Theory of Secrecy Systems.
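The Hamming codes mentioned above can be illustrated with the common Hamming(7,4) layout, in which three parity bits protect four data bits and any single flipped bit can be located and corrected; the specific bit arrangement below is one standard textbook presentation, not taken from this article.

# Minimal sketch of Hamming(7,4) encoding and single-bit error correction.

def hamming74_encode(d):                         # d = [d1, d2, d3, d4]
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4                            # parity over positions 1,3,5,7
    p2 = d1 ^ d3 ^ d4                            # parity over positions 2,3,6,7
    p4 = d2 ^ d3 ^ d4                            # parity over positions 4,5,6,7
    return [p1, p2, d1, p4, d2, d3, d4]          # codeword positions 1..7

def hamming74_correct(c):
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s4 = c[3] ^ c[4] ^ c[5] ^ c[6]
    error_pos = s1 + 2 * s2 + 4 * s4             # 0 means no single-bit error detected
    if error_pos:
        c[error_pos - 1] ^= 1                    # flip the offending bit back
    return c

code = hamming74_encode([1, 0, 1, 1])
code[5] ^= 1                                     # simulate one bit flipped in transit
print(hamming74_correct(code))                   # [0, 1, 1, 0, 0, 1, 1], error repaired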
Calculators
Model I: A complex number calculator, completed in 1939 and put into operation in 1940, for doing calculations of complex numbers.
Model II: Relay Computer / Relay Interpolator, September 1943, for interpolating data points of flight profiles (needed for performance testing of a gun director). This model introduced error detection (self checking).
Model III: Ballistic Computer, June 1944, for calculations of ballistic trajectories
Model IV: Error Detector Mark II, March 1945, improved ballistic computer
Model V: General purpose electromechanical computer, of which two were built, July 1946 and February 1947
Model VI: 1949, an enhanced Model V
1950s
In 1952, William Gardner Pfann revealed the method of zone melting which enabled semiconductor purification and level doping.
The 1950s also saw developments based upon information theory. The central development was binary code systems. Efforts concentrated on the prime mission of supporting the Bell System with engineering advances, including the N-carrier system, TD microwave radio relay, direct distance dialing, the E-repeater, the wire spring relay, and the Number Five Crossbar Switching System.
In 1953, Maurice Karnaugh developed the Karnaugh map, used for simplifying Boolean algebraic expressions. In 1954, the first modern solar cell was invented at Bell Laboratories. In 1956, TAT-1, the first transatlantic communications cable, was laid between Scotland and Newfoundland in a joint effort by AT&T, Bell Laboratories, and British and Canadian telephone companies. In 1957, Max Mathews created MUSIC, one of the first computer programs to play electronic music. Robert C. Prim and Joseph Kruskal developed new greedy algorithms that revolutionized computer network design. In 1958, a technical paper by Arthur Schawlow and Charles Hard Townes first described the laser. In 1959, Mohamed M. Atalla and Dawon Kahng invented the metal-oxide semiconductor field-effect transistor (MOSFET). The MOSFET has achieved electronic hegemony and sustains the large-scale integration (LSI) of circuits underlying today's information society.
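The greedy approach can be illustrated with a minimum spanning tree in the style of Prim's algorithm, which repeatedly adds the cheapest link that reaches a new node; the network and costs below are invented for the example.

# Toy sketch of a Prim-style greedy minimum spanning tree over made-up link costs.
import heapq

def prim_mst(graph, start):
    # graph: {node: [(cost, neighbor), ...]}; returns the (cost, node) picks and total cost.
    visited, picks, total = {start}, [], 0
    frontier = list(graph[start])
    heapq.heapify(frontier)
    while frontier and len(visited) < len(graph):
        cost, node = heapq.heappop(frontier)
        if node in visited:
            continue
        visited.add(node)
        picks.append((cost, node))
        total += cost
        for edge in graph[node]:
            if edge[1] not in visited:
                heapq.heappush(frontier, edge)
    return picks, total

network = {
    'A': [(3, 'B'), (1, 'C')],
    'B': [(3, 'A'), (7, 'C'), (5, 'D')],
    'C': [(1, 'A'), (7, 'B'), (2, 'D')],
    'D': [(5, 'B'), (2, 'C')],
}
print(prim_mst(network, 'A'))   # ([(1, 'C'), (2, 'D'), (3, 'B')], 6)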
1960s
In December 1960, Ali Javan and his associates William Bennett and Donald Heriot successfully operated the first gas laser, the first continuous-light laser, operating at an unprecedented accuracy and color purity. In 1962, the electret microphone was invented by Gerhard M. Sessler and James Edward Maceo West. Also in 1962, John R. Pierce's vision of communications satellites was realized by the launch of Telstar. In 1964, the carbon dioxide laser was invented by Kumar Patel and the discovery/operation of the Nd:YAG laser was demonstrated by J.E. Geusic et al. Experiments by Myriam Sarachik provided the first data that confirmed the Kondo effect. The research of Philip W. Anderson into electronic structure of magnetic and disordered systems led to improved understanding of metals and insulators for which he was awarded the Nobel Prize for Physics in 1977. In 1965, Penzias and Wilson discovered the cosmic microwave background, for which they were awarded the Nobel Prize in Physics in 1978. Frank W. Sinden, Edward E. Zajac, Kenneth C. Knowlton, and A. Michael Noll made computer-animated movies during the early to mid-1960s. Ken C. Knowlton invented the computer animation language BEFLIX. The first digital computer art was created in 1962 by Noll. In 1966, orthogonal frequency-division multiplexing (OFDM), a key technology in wireless services, was developed and patented by R. W. Chang. In 1968, molecular beam epitaxy was developed by J.R. Arthur and A.Y. Cho; molecular beam epitaxy allows semiconductor chips and laser matrices to be manufactured one atomic layer at a time. In 1969, Dennis Ritchie and Ken Thompson created the computer operating system UNIX for the support of telecommunication switching systems as well as general purpose computing. From 1969 to 1971, Aaron Marcus, the first graphic designer involved with computer graphics, researched, designed, and programmed a prototype interactive page-layout system for the Picturephone. In 1969, the charge-coupled device (CCD) was invented by Willard Boyle and George E. Smith, for which they were awarded the Nobel Prize in Physics in 2009. In the 1960s, the New York City site was sold and became the Westbeth Artists Community complex.
1970s
The 1970s and 1980s saw more and more computer-related inventions at the Bell Laboratories as part of the personal computing revolution. In 1972, Dennis Ritchie developed the compiled programming language C as a replacement for the interpreted language B, which was then used in a "worse is better" rewrite of UNIX. Also, the language AWK was designed and implemented by Alfred Aho, Peter Weinberger, and Brian Kernighan of Bell Laboratories. In 1972, Marc Rochkind invented the Source Code Control System.
In 1970, A. Michael Noll invented a tactile, force-feedback system, coupled with interactive stereoscopic computer display. In 1971, an improved task priority system for computerized telephone exchange switching systems for telephone traffic was invented by Erna Schneider Hoover, who received one of the first software patents for it. In 1976, optical fiber systems were first tested in Georgia, and in 1980, the first single-chip 32-bit microprocessor, the Bellmac 32A, was demonstrated. It went into production in 1982.
The 1970s also saw a major central office technology evolve from crossbar electromechanical relay-based technology and discrete transistor logic to Bell Labs-developed thick film hybrid and transistor–transistor logic (TTL), stored program-controlled switching systems; 1A/#4 TOLL Electronic Switching Systems (ESS) and 2A Local Central Offices produced at the Bell Labs Naperville and Western Electric Lisle, Illinois facilities. This technology evolution dramatically reduced floor space needs. The new ESS also came with its own diagnostic software that needed only a switchman and several frame technicians to maintain.
1980s
In 1980, TDMA and CDMA digital cellular telephone technologies were patented. In 1982, the fractional quantum Hall effect was discovered by Horst Störmer and former Bell Laboratories researchers Robert B. Laughlin and Daniel C. Tsui; they consequently won a Nobel Prize in 1998 for the discovery. In 1985, the programming language C++ had its first commercial release. Bjarne Stroustrup started developing C++ at Bell Laboratories in 1979 as an extension to the original C language.
In 1984, the first photoconductive antennas for picosecond electromagnetic radiation were demonstrated by Auston and others. This type of antenna became an important component in terahertz time-domain spectroscopy. In 1984, Karmarkar's algorithm for linear programming was developed by mathematician Narendra Karmarkar. Also in 1984, a divestiture agreement signed in 1982 with the American federal government forced the break-up of AT&T: Bellcore (now iconectiv) was split off from Bell Laboratories to provide the same R&D functions for the newly created local exchange carriers. AT&T was also limited to using the Bell trademark only in association with Bell Laboratories. Bell Telephone Laboratories, Inc. became a wholly owned company of the new AT&T Technologies unit, the former Western Electric. The 5ESS Switch was developed during this transition. In 1985, laser cooling was used to slow and manipulate atoms by Steven Chu and his team. In 1985, the modeling language AMPL (A Mathematical Programming Language) was developed by Robert Fourer, David M. Gay and Brian Kernighan at Bell Laboratories. Also in 1985, Bell Laboratories was awarded the National Medal of Technology "For contribution over decades to modern communication systems". During the 1980s, the operating system Plan 9 from Bell Labs was developed, extending the UNIX model. Also, the Radiodrum, an electronic music instrument played in three spatial dimensions, was invented. In 1988, TAT-8 became the first transatlantic fiber-optic cable. Bell Labs in Freehold, NJ developed the 1.3-micron fiber, cable, splicing, laser detector, and 280 Mbit/s repeater for 40,000-telephone-call capacity.
Arthur Ashkin invented optical tweezers, which use laser beams to grab and hold particles, atoms, viruses and other living cells. A major breakthrough came in 1987, when Ashkin used the tweezers to capture living bacteria without harming them. He immediately began studying biological systems using the optical tweezers, which are now widely used to investigate the machinery of life. He was awarded the Nobel Prize in Physics (2018) for his work involving optical tweezers and their application to biological systems.
1990s
In the early 1990s, approaches to increase modem speeds to 56K were explored at Bell Labs, and early patents were filed in 1992 by Ender Ayanoglu, Nuri R. Dagdeviren and their colleagues. In 1994, the quantum cascade laser was invented by Federico Capasso, Alfred Cho, Jerome Faist and their collaborators. Also in 1994, Peter Shor devised his quantum factorization algorithm. In 1996, SCALPEL electron lithography, which prints features atoms wide on microchips, was invented by Lloyd Harriott and his team. The operating system Inferno, an update of Plan 9, was created by Dennis Ritchie with others, using the then-new concurrent programming language Limbo. A high performance database engine (Dali) was developed which became DataBlitz in its product form.
In 1996, AT&T spun off Bell Laboratories, along with most of its equipment manufacturing business, into a new company named Lucent Technologies. AT&T retained a small number of researchers who made up the staff of the newly created AT&T Labs.
In 1997, the smallest then-practical transistor (60 nanometers, 182 atoms wide) was built. In 1998, the first optical router was invented.
2000s
2000 was an active year for the Laboratories: DNA machine prototypes were developed; a progressive geometry compression algorithm made widespread 3-D communication practical; the first electrically powered organic laser was invented; a large-scale map of cosmic dark matter was compiled; and F-15, an organic material that makes plastic transistors possible, was invented.
In 2002, physicist Jan Hendrik Schön was fired after his work was found to contain fraudulent data. It was the first known case of fraud at Bell Labs.
In 2003, the New Jersey Institute of Technology Biomedical Engineering Laboratory was created at Murray Hill, New Jersey.
In 2005, Jeong H. Kim, former President of Lucent's Optical Network Group, returned from academia to become the President of Bell Laboratories.
In April 2006, Bell Laboratories' parent company, Lucent Technologies, signed a merger agreement with Alcatel. On December 1, 2006, the merged company, Alcatel-Lucent, began operations. This deal raised concerns in the United States, where Bell Laboratories works on defense contracts. A separate company, LGS Innovations, with an American board was set up to manage Bell Laboratories' and Lucent's sensitive U.S. government contracts. In March 2019, LGS Innovations was purchased by CACI.
In December 2007, it was announced that the former Lucent Bell Laboratories and the former Alcatel Research and Innovation would be merged into one organization under the name of Bell Laboratories. This was the first period of growth after many years during which Bell Laboratories had progressively lost manpower through layoffs and spin-offs.
As of July 2008, however, only four scientists remained in physics research, according to a report by the scientific journal Nature.
On August 28, 2008, Alcatel-Lucent announced it was pulling out of basic science, material physics, and semiconductor research, and would instead focus on more immediately marketable areas, including networking, high-speed electronics, wireless networks, nanotechnology and software.
In 2009, Willard Boyle and George Smith were awarded the Nobel Prize in Physics for the invention and development of the charge-coupled device (CCD).
2010s
Gee Rittenhouse, former Head of Research, returned from his position as chief operating officer of Alcatel-Lucent's Software, Services, and Solutions business in February 2013, to become the 12th President of Bell Labs.
On November 4, 2013, Alcatel-Lucent announced the appointment of Marcus Weldon as President of Bell Labs. His stated charter was to return Bell Labs to the forefront of innovation in Information and communications technology by focusing on solving the key industry challenges, as was the case in the great Bell Labs innovation eras in the past.
In July 2014, Bell Labs announced it had broken "the broadband Internet speed record" with a new technology dubbed XG-FAST that promises 10 gigabits per second transmission speeds.
In 2014, Eric Betzig shared the Nobel Prize in Chemistry for his work in super-resolved fluorescence microscopy which he began pursuing while at Bell Labs in the Semiconductor Physics Research Department.
On April 15, 2015, Nokia agreed to acquire Alcatel-Lucent, Bell Labs' parent company, in a share exchange worth $16.6 billion. Their first day of combined operations was January 14, 2016.
In September 2016, Nokia Bell Labs, along with Technische Universität Berlin, Deutsche Telekom T-Labs and the Technical University of Munich achieved a data rate of one terabit per second by improving transmission capacity and spectral efficiency in an optical communications field trial with a new modulation technique.
In 2018, Arthur Ashkin shared the Nobel Prize in Physics for his work on "the optical tweezers and their application to biological systems" which was developed at Bell Labs in the 1980s.
2020s
In 2020, Alfred Aho and Jeffrey Ullman shared the Turing Award for their work on Compilers.
Nobel Prizes, Turing Awards, Emmy Awards, Grammy Award, and Academy Award
Nine Nobel Prizes have been awarded for work completed at Bell Laboratories.
1937: Clinton J. Davisson shared the Nobel Prize in Physics for demonstrating the wave nature of matter.
1956: John Bardeen, Walter H. Brattain, and William Shockley received the Nobel Prize in Physics for inventing the first transistors.
1977: Philip W. Anderson shared the Nobel Prize in Physics for developing an improved understanding of the electronic structure of glass and magnetic materials.
1978: Arno A. Penzias and Robert W. Wilson shared the Nobel Prize in Physics. Penzias and Wilson were cited for their discovery of cosmic microwave background radiation, a nearly uniform glow that fills the Universe in the microwave band of the radio spectrum.
1997: Steven Chu shared the Nobel Prize in Physics for developing methods to cool and trap atoms with laser light.
1998: Horst Störmer, Robert Laughlin, and Daniel Tsui, were awarded the Nobel Prize in Physics for discovering and explaining the fractional quantum Hall effect.
2009: Willard S. Boyle and George E. Smith shared the Nobel Prize in Physics with Charles K. Kao. Boyle and Smith were cited for inventing the charge-coupled device (CCD), a semiconductor imaging sensor.
2014: Eric Betzig shared the Nobel Prize in Chemistry for his work in super-resolved fluorescence microscopy which he began pursuing while at Bell Labs.
2018: Arthur Ashkin shared the Nobel Prize in Physics for his work on "the optical tweezers and their application to biological systems" which was developed at Bell Labs.
The Turing Award has been won five times by Bell Labs researchers.
1968: Richard Hamming for his work on numerical methods, automatic coding systems, and error-detecting and error-correcting codes.
1983: Ken Thompson and Dennis Ritchie for their work on operating system theory, and for developing Unix.
1986: Robert Tarjan with John Hopcroft, for fundamental achievements in the design and analysis of algorithms and data structures.
2018: Yann LeCun and Yoshua Bengio shared the Turing Award with Geoffrey Hinton for their work in Deep Learning.
2020: Alfred Aho and Jeffrey Ullman shared the Turing Award for their work on Compilers.
The Emmy Award has been won five times by Bell Labs: once under Lucent Technologies, once under Alcatel-Lucent, and three times under Nokia.
1997: Primetime Engineering Emmy Award for "work on digital television as part of the HDTV Grand Alliance."
2013: Technology and Engineering Emmy for its "Pioneering Work in Implementation and Deployment of Network DVR"
2016: Technology & Engineering Emmy Award for the pioneering invention and deployment of fiber-optic cable.
2020: Technology & Engineering Emmy Award for the CCD (charge-coupled device), which was crucial in the development of television by allowing images to be captured digitally for recording and transmission.
2021: Technology & Engineering Emmy Award for the "ISO Base Media File Format standardization, in which our multimedia research unit has played a major role."
The invention of fiber optics and the research on digital television and the media file format were done under former AT&T Bell Labs ownership.
The Grammy Award has been won once by Bell Labs under Alcatel-Lucent.
2006: Technical Grammy Award for outstanding technical contributions to the recording field.
The Academy Award has been won once by E. C. Wente and Bell Labs.
1937: Scientific or Technical Award (Class II) for their multi-cellular high-frequency horn and receiver.
Presidents
Notable alumni
Programs
On May 20, 2014, Bell Labs announced the Bell Labs Prize, a competition for innovators to offer proposals in information and communication technologies, with cash awards of up to $100,000 for the grand prize.
Bell Labs Technology Showcase
The Murray Hill campus features an exhibit, the Bell Labs Technology Showcase, showcasing the technological discoveries and developments at Bell Labs. The exhibit is located just off the main lobby and is open to the public.
See also
Bell Labs Holmdel Complex
Bell Labs Technical Journal—Published scientific journal of Bell Laboratories (1996–present)
Bell System Technical Journal—Published scientific journal of Bell Laboratories (1922–1983)
Bell Labs Record
Industrial laboratory
George Stibitz—Bell Laboratories engineer—"father of the modern digital computer"
History of mobile phones—Bell Laboratories conception and development of cellular phones
High speed photography & Wollensak—Fastax high speed (rotating prism) cameras developed by Bell Labs
Knolls Atomic Power Laboratory
Simplified Message Desk Interface
Sound film—Westrex sound system for cinema films developed by Bell Labs
TWX Magazine—A short-lived trade periodical published by Bell Laboratories (1944–1952)
Walter A. Shewhart—Bell Laboratories engineer—"father of statistical quality control"
"Worse is Better"—A software design philosophy also called "The New Jersey Style" under which UNIX and C were supposedly developed
Experiments in Art and Technology—A collaboration between artists and Bell Labs engineers & scientists to create new forms of art.
References
Further reading
Martin, Douglas. Ian M. Ross, a President at Bell Labs, Dies at 85, The New York Times, March 16, 2013, p. A23
Gleick, James. The Information: A History, a Theory, a Flood. Vintage Books, 2012, 544 pages.
External links
Bell Works, the re-imagining of the historic former Bell Labs building in Holmdel, New Jersey
Timeline of discoveries as of 2006 (https://www.bell-labs.com/timeline)
Bell Labs' Murray Hill anechoic chamber
Bell Laboratories and the Development of Electrical Recording
History of Bell Telephone Laboratories, Inc. (from Bell System Memorial)
The Idea Factory, a video interview with Jon Gertner, author of The Idea Factory: Bell Labs and the Great Age of American Innovation, by Dave Iverson of KQED-FM Public Radio, San Francisco
Bluetooth
Bluetooth is a short-range wireless technology standard that is used for exchanging data between fixed and mobile devices over short distances using UHF radio waves in the ISM bands, from 2.402 to 2.48 GHz, and building personal area networks (PANs). It is mainly used as an alternative to wire connections, to exchange files between nearby portable devices and to connect cell phones and music players with wireless headphones. In the most widely used mode, transmission power is limited to 2.5 milliwatts, giving it a very short range.
Bluetooth is managed by the Bluetooth Special Interest Group (SIG), which has more than 35,000 member companies in the areas of telecommunication, computing, networking, and consumer electronics. The IEEE standardized Bluetooth as IEEE 802.15.1, but no longer maintains the standard. The Bluetooth SIG oversees development of the specification, manages the qualification program, and protects the trademarks. A manufacturer must meet Bluetooth SIG standards to market a product as a Bluetooth device. A network of patents applies to the technology, and these are licensed to individual qualifying devices. Bluetooth integrated circuit chips ship approximately million units annually. By 2017, there were 3.6 billion Bluetooth devices shipping annually and the shipments were expected to continue increasing at about 12% a year.
Etymology
The name "Bluetooth" was proposed in 1997 by Jim Kardach of Intel, one of the founders of the Bluetooth SIG. The name was inspired by a conversation with Sven Mattisson who related Scandinavian history through tales from Frans G. Bengtsson's The Long Ships, a historical novel about Vikings and the 10th-century Danish king Harald Bluetooth. Upon discovering a picture of the Harald Bluetooth rune stone in the book Gwyn Jones's A History of the Vikings, Jim proposed Bluetooth as the codename for the short-range wireless program which is now called Bluetooth.
According to Bluetooth's official website,
Bluetooth is the Anglicised version of the Scandinavian Blåtand/Blåtann (or in Old Norse blátǫnn). It was the epithet of King Harald Bluetooth, who united the disparate
Danish tribes into a single kingdom; Kardach chose the name to imply that Bluetooth similarly unites communication protocols.
Logo
The Bluetooth logo is a bind rune merging the Younger Futhark runes (ᚼ, Hagall) and (ᛒ, Bjarkan), Harald's initials.
History
The development of the "short-link" radio technology, later named Bluetooth, was initiated in 1989 by Nils Rydbeck, CTO at Ericsson Mobile in Lund, Sweden. The purpose was to develop wireless headsets, based on two inventions by Johan Ullman. Nils Rydbeck tasked Tord Wingren with the specification and Dutchman Jaap Haartsen and Sven Mattisson with the development. Both were working for Ericsson in Lund. Principal design and development began in 1994, and by 1997 the team had a workable solution. From 1997 Örjan Johansson became the project leader and propelled the technology and standardization.
In 1997, Adalio Sanchez, then head of IBM ThinkPad product R&D, approached Nils Rydbeck about collaborating on integrating a mobile phone into a ThinkPad notebook. The two assigned engineers from Ericsson and IBM to study the idea. The conclusion was that power consumption on cellphone technology at that time was too high to allow viable integration into a notebook and still achieve adequate battery life. Instead, the two companies agreed to integrate Ericsson's short-link technology on both a ThinkPad notebook and an Ericsson phone to accomplish the goal. Since neither IBM ThinkPad notebooks nor Ericsson phones were the market share leaders in their respective markets at that time, Adalio Sanchez and Nils Rydbeck agreed to make the short-link technology an open industry standard to permit each player maximum market access. Ericsson contributed the short-link radio technology, and IBM contributed patents around the logical layer. Adalio Sanchez of IBM then recruited Stephen Nachtsheim of Intel to join and then Intel also recruited Toshiba and Nokia. In May 1998, the Bluetooth SIG was launched with IBM and Ericsson as the founding signatories and a total of five members: Ericsson, Intel, Nokia, Toshiba and IBM.
The first consumer Bluetooth device was launched in 1999. It was a hands-free mobile headset that earned the "Best of show Technology Award" at COMDEX.
The first Bluetooth mobile phone was the Ericsson T36 but it was the revised T39 model that actually made it to store shelves in 2001. In parallel, IBM introduced the IBM ThinkPad A30 in October 2001 which was the first notebook with integrated Bluetooth.
Bluetooth's early incorporation into consumer electronics products continued at Vosi Technologies in Costa Mesa, California, USA, initially overseen by founding members Bejan Amini and Tom Davidson. Vosi Technologies had been created by real estate developer Ivano Stegmenga, with United States Patent 608507, for communication between a cellular phone and a vehicle's audio system. At the time, Sony/Ericsson had only a minor market share in the cellular phone market, which was dominated in the US by Nokia and Motorola. Due to ongoing negotiations for an intended licensing agreement with Motorola beginning in the late 1990s, Vosi could not publicly disclose the intention, integration and initial development of other enabled devices which were to be the first "Smart Home" internet connected devices.
Vosi needed a means for the system to communicate without a wired connection from the vehicle to the other devices in the network. Bluetooth was chosen, since WiFi was not yet readily available or supported in the public market. Vosi had begun to develop the Vosi Cello integrated vehicular system and some other internet connected devices, one of which was intended to be a table-top device named the Vosi Symphony, networked with Bluetooth. Through the negotiations with Motorola, Vosi introduced and disclosed its intent to integrate Bluetooth in its devices. In the early 2000s a legal battle ensued between Vosi and Motorola, which indefinitely suspended release of the devices. Later, Motorola implemented it in their devices which initiated the significant propagation of Bluetooth in the public market due to its large market share at the time.
In 2012, Jaap Haartsen was nominated by the European Patent Office for the European Inventor Award.
Implementation
Bluetooth operates at frequencies between 2.402 and 2.480 GHz, or 2.400 and 2.4835 GHz, including guard bands 2 MHz wide at the bottom end and 3.5 MHz wide at the top. This is in the globally unlicensed (but not unregulated) industrial, scientific and medical (ISM) 2.4 GHz short-range radio frequency band. Bluetooth uses a radio technology called frequency-hopping spread spectrum. Bluetooth divides transmitted data into packets, and transmits each packet on one of 79 designated Bluetooth channels. Each channel has a bandwidth of 1 MHz. It usually performs 1600 hops per second, with adaptive frequency-hopping (AFH) enabled. Bluetooth Low Energy uses 2 MHz spacing, which accommodates 40 channels.
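As an illustration of the channel layout described above, the following sketch (in Python, written for this article rather than taken from any Bluetooth specification) maps an RF channel number to its centre frequency for BR/EDR (79 channels at 1 MHz spacing) and for Bluetooth Low Energy (40 channels at 2 MHz spacing); the function names are invented for the example.

# Minimal sketch: centre frequencies of Bluetooth RF channels, assuming the
# 2.402 GHz base described above. Names are illustrative only.

def br_edr_channel_mhz(k: int) -> int:
    """Centre frequency in MHz of BR/EDR channel k (0..78), 1 MHz spacing."""
    if not 0 <= k <= 78:
        raise ValueError("BR/EDR channel index must be 0..78")
    return 2402 + k

def ble_channel_mhz(k: int) -> int:
    """Centre frequency in MHz of BLE RF channel k (0..39), 2 MHz spacing."""
    if not 0 <= k <= 39:
        raise ValueError("BLE RF channel index must be 0..39")
    return 2402 + 2 * k

if __name__ == "__main__":
    print(br_edr_channel_mhz(0), br_edr_channel_mhz(78))   # 2402 2480
    print(ble_channel_mhz(0), ble_channel_mhz(39))         # 2402 2480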
Originally, Gaussian frequency-shift keying (GFSK) modulation was the only modulation scheme available. Since the introduction of Bluetooth 2.0+EDR, π/4-DQPSK (differential quadrature phase-shift keying) and 8-DPSK modulation may also be used between compatible devices. Devices functioning with GFSK are said to be operating in basic rate (BR) mode, where an instantaneous bit rate of 1 Mbit/s is possible. The term Enhanced Data Rate (EDR) is used to describe the π/4-DQPSK (EDR2) and 8-DPSK (EDR3) schemes, giving 2 and 3 Mbit/s respectively. The combination of these (BR and EDR) modes in Bluetooth radio technology is classified as a BR/EDR radio.
In 2019, Apple published an extension called HDR which supports data rates of 4 (HDR4) and 8 (HDR8) Mbit/s using π/4-DQPSK modulation on 4 MHz channels with forward error correction (FEC).
Bluetooth is a packet-based protocol with a master/slave architecture. One master may communicate with up to seven slaves in a piconet. All devices within a given piconet use the clock provided by the master as the base for packet exchange. The master clock ticks with a period of 312.5 µs; two clock ticks make up a slot of 625 µs, and two slots make up a slot pair of 1250 µs. In the simple case of single-slot packets, the master transmits in even slots and receives in odd slots. The slave, conversely, receives in even slots and transmits in odd slots. Packets may be 1, 3, or 5 slots long, but in all cases, the master's transmission begins in even slots and the slave's in odd slots.
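To make the slot arithmetic concrete, here is a small illustrative sketch (Python, not specification pseudocode) that derives the slot number from the master clock ticks and decides, for single-slot packets only, whether the master or the slave transmits in it.

# Minimal sketch of BR/EDR slot timing for single-slot packets.
# The master clock ticks every 312.5 microseconds; two ticks form one
# 625-microsecond slot. Names here are illustrative, not from the spec text.

TICK_US = 312.5          # master clock period in microseconds
SLOT_US = 2 * TICK_US    # 625 microseconds per slot

def slot_number(clock_ticks: int) -> int:
    """Slot number reached after the given number of master clock ticks."""
    return clock_ticks // 2

def transmitter_in_slot(slot: int) -> str:
    """For single-slot packets: master sends in even slots, slave in odd ones."""
    return "master" if slot % 2 == 0 else "slave"

if __name__ == "__main__":
    for ticks in (0, 2, 4, 6):
        s = slot_number(ticks)
        print(f"tick {ticks}: slot {s}, {transmitter_in_slot(s)} transmits")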
The above excludes Bluetooth Low Energy, introduced in the 4.0 specification, which uses the same spectrum but somewhat differently.
Communication and connection
A master BR/EDR Bluetooth device can communicate with a maximum of seven devices in a piconet (an ad hoc computer network using Bluetooth technology), though not all devices reach this maximum. The devices can switch roles, by agreement, and the slave can become the master (for example, a headset initiating a connection to a phone necessarily begins as master—as an initiator of the connection—but may subsequently operate as the slave).
The Bluetooth Core Specification provides for the connection of two or more piconets to form a scatternet, in which certain devices simultaneously play the master role in one piconet and the slave role in another.
At any given time, data can be transferred between the master and one other device (except for the little-used broadcast mode). The master chooses which slave device to address; typically, it switches rapidly from one device to another in a round-robin fashion. Since it is the master that chooses which slave to address, whereas a slave is (in theory) supposed to listen in each receive slot, being a master is a lighter burden than being a slave. Being a master of seven slaves is possible; being a slave of more than one master is possible. The specification is vague as to required behavior in scatternets.
Uses
Bluetooth is a standard wire-replacement communications protocol primarily designed for low power consumption, with a short range based on low-cost transceiver microchips in each device. Because the devices use a radio (broadcast) communications system, they do not have to be in visual line of sight of each other; however, a quasi-optical wireless path must be viable. Range is power-class-dependent, and effective ranges vary in practice.
Officially Class 3 radios have a range of up to , Class 2, most commonly found in mobile devices, , and Class 1, primarily for industrial use cases, . Bluetooth marketing qualifies that Class 1 range is in most cases , and Class 2 range . The actual range achieved by a given link will depend on the qualities of the devices at both ends of the link, as well as the air conditions in between, and other factors.
The effective range varies depending on propagation conditions, material coverage, production sample variations, antenna configurations and battery conditions. Most Bluetooth applications are for indoor conditions, where attenuation of walls and signal fading due to signal reflections make the range far lower than specified line-of-sight ranges of the Bluetooth products.
Most Bluetooth applications are battery-powered Class 2 devices, with little difference in range whether the other end of the link is a Class 1 or Class 2 device as the lower-powered device tends to set the range limit. In some cases the effective range of the data link can be extended when a Class 2 device is connecting to a Class 1 transceiver with both higher sensitivity and transmission power than a typical Class 2 device. Mostly, however, the Class 1 devices have a similar sensitivity to Class 2 devices. Connecting two Class 1 devices with both high sensitivity and high power can allow ranges far in excess of the typical 100m, depending on the throughput required by the application. Some such devices allow open field ranges of up to 1 km and beyond between two similar devices without exceeding legal emission limits.
The Bluetooth Core Specification mandates a range of not less than , but there is no upper limit on actual range. Manufacturers' implementations can be tuned to provide the range needed for each case.
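Because range follows from transmit power, receiver sensitivity and path loss rather than from a fixed figure, a rough free-space estimate can illustrate why the classes and conditions above differ so much. The sketch below is a back-of-the-envelope calculation only; the power and sensitivity figures in the example are assumptions chosen for illustration, not values taken from the specification, and real indoor range is far lower because of walls and fading.

import math

# Free-space path loss at 2.4 GHz:
# FSPL(dB) = 20*log10(d_m) + 20*log10(f_MHz) - 27.55
# This ignores obstructions, fading and antenna effects.

def max_free_space_range_m(tx_power_dbm: float, rx_sensitivity_dbm: float,
                           freq_mhz: float = 2441.0) -> float:
    """Largest distance (metres) at which the free-space link budget is non-negative."""
    link_budget_db = tx_power_dbm - rx_sensitivity_dbm
    return 10 ** ((link_budget_db - 20 * math.log10(freq_mhz) + 27.55) / 20)

if __name__ == "__main__":
    # Assumed example figures (hypothetical): a 4 dBm transmitter, a -90 dBm receiver.
    print(round(max_free_space_range_m(4, -90)), "m in free space")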
Bluetooth profile
To use Bluetooth wireless technology, a device must be able to interpret certain Bluetooth profiles, which are definitions of possible applications and specify general behaviors that Bluetooth-enabled devices use to communicate with other Bluetooth devices. These profiles include settings to parameterize and to control the communication from the start. Adherence to profiles saves the time for transmitting the parameters anew before the bi-directional link becomes effective. There are a wide range of Bluetooth profiles that describe many different types of applications or use cases for devices.
List of applications
Wireless control and communication between a mobile phone and a handsfree headset. This was one of the earliest applications to become popular.
Wireless control of and communication between a mobile phone and a Bluetooth compatible car stereo system (and sometimes between the SIM card and the car phone).
Wireless communication between a smartphone and a smart lock for unlocking doors.
Wireless control of and communication with iOS and Android device phones, tablets and portable wireless speakers.
Wireless Bluetooth headset and intercom. Idiomatically, a headset is sometimes called "a Bluetooth".
Wireless streaming of audio to headphones with or without communication capabilities.
Wireless streaming of data collected by Bluetooth-enabled fitness devices to phone or PC.
Wireless networking between PCs in a confined space and where little bandwidth is required.
Wireless communication with PC input and output devices, the most common being the mouse, keyboard and printer.
Transfer of files, contact details, calendar appointments, and reminders between devices with OBEX and sharing directories via FTP.
Replacement of previous wired RS-232 serial communications in test equipment, GPS receivers, medical equipment, bar code scanners, and traffic control devices.
For controls where infrared was often used.
For low bandwidth applications where higher USB bandwidth is not required and cable-free connection desired.
Sending small advertisements from Bluetooth-enabled advertising hoardings to other, discoverable, Bluetooth devices.
Wireless bridge between two Industrial Ethernet (e.g., PROFINET) networks.
Seventh and eighth generation game consoles such as Nintendo's Wii, and Sony's PlayStation 3 use Bluetooth for their respective wireless controllers.
Dial-up internet access on personal computers or PDAs using a data-capable mobile phone as a wireless modem.
Short-range transmission of health sensor data from medical devices to mobile phone, set-top box or dedicated telehealth devices.
Allowing a DECT phone to ring and answer calls on behalf of a nearby mobile phone.
Real-time location systems (RTLS) are used to track and identify the location of objects in real time using "Nodes" or "tags" attached to, or embedded in, the objects tracked, and "Readers" that receive and process the wireless signals from these tags to determine their locations.
Personal security application on mobile phones for prevention of theft or loss of items. The protected item has a Bluetooth marker (e.g., a tag) that is in constant communication with the phone. If the connection is broken (the marker is out of range of the phone) then an alarm is raised. This can also be used as a man overboard alarm.
Calgary, Alberta, Canada's Roads Traffic division uses data collected from travelers' Bluetooth devices to predict travel times and road congestion for motorists.
Wireless transmission of audio (a more reliable alternative to FM transmitters)
Live video streaming to the visual cortical implant device by Nabeel Fattah in Newcastle university 2017.
Connection of motion controllers to a PC when using VR headsets
Bluetooth vs Wi-Fi (IEEE 802.11)
Bluetooth and Wi-Fi (Wi-Fi is the brand name for products using IEEE 802.11 standards) have some similar applications: setting up networks, printing, or transferring files. Wi-Fi is intended as a replacement for high-speed cabling for general local area network access in work areas or home. This category of applications is sometimes called wireless local area networks (WLAN). Bluetooth was intended for portable equipment and its applications. The category of applications is outlined as the wireless personal area network (WPAN). Bluetooth is a replacement for cabling in various personally carried applications in any setting and also works for fixed location applications such as smart energy functionality in the home (thermostats, etc.).
Wi-Fi and Bluetooth are to some extent complementary in their applications and usage. Wi-Fi is usually access point-centered, with an asymmetrical client-server connection with all traffic routed through the access point, while Bluetooth is usually symmetrical, between two Bluetooth devices. Bluetooth serves well in simple applications where two devices need to connect with a minimal configuration like a button press, as in headsets and speakers.
Devices
Bluetooth exists in numerous products such as telephones, speakers, tablets, media players, robotics systems, laptops, and console gaming equipment as well as some high-definition headsets, modems, hearing aids and even watches. Given the variety of devices which use Bluetooth, coupled with the contemporary deprecation of headphone jacks by Apple, Google, and other companies, and the lack of regulation by the FCC, the technology is prone to interference. Nonetheless, Bluetooth is useful when transferring information between two or more devices that are near each other in low-bandwidth situations. Bluetooth is commonly used to transfer sound data with telephones (i.e., with a Bluetooth headset) or byte data with hand-held computers (transferring files).
Bluetooth protocols simplify the discovery and setup of services between devices. Bluetooth devices can advertise all of the services they provide. This makes using services easier, because more of the security, network address and permission configuration can be automated than with many other network types.
Computer requirements
A personal computer that does not have embedded Bluetooth can use a Bluetooth adapter that enables the PC to communicate with Bluetooth devices. While some desktop computers and most recent laptops come with a built-in Bluetooth radio, others require an external adapter, typically in the form of a small USB "dongle."
Unlike its predecessor, IrDA, which requires a separate adapter for each device, Bluetooth lets multiple devices communicate with a computer over a single adapter.
Operating system implementation
For Microsoft platforms, Windows XP Service Pack 2 and SP3 releases work natively with Bluetooth v1.1, v2.0 and v2.0+EDR. Previous versions required users to install their Bluetooth adapter's own drivers, which were not directly supported by Microsoft. Microsoft's own Bluetooth dongles (packaged with their Bluetooth computer devices) have no external drivers and thus require at least Windows XP Service Pack 2. Windows Vista RTM/SP1 with the Feature Pack for Wireless or Windows Vista SP2 work with Bluetooth v2.1+EDR. Windows 7 works with Bluetooth v2.1+EDR and Extended Inquiry Response (EIR).
The Windows XP and Windows Vista/Windows 7 Bluetooth stacks support the following Bluetooth profiles natively: PAN, SPP, DUN, HID, HCRP. The Windows XP stack can be replaced by a third party stack that supports more profiles or newer Bluetooth versions. The Windows Vista/Windows 7 Bluetooth stack supports vendor-supplied additional profiles without requiring that the Microsoft stack be replaced. It is generally recommended to install the latest vendor driver and its associated stack to be able to use the Bluetooth device at its fullest extent.
Apple products have worked with Bluetooth since Mac OS X v10.2, which was released in 2002.
Linux has two popular Bluetooth stacks, BlueZ and Fluoride. The BlueZ stack is included with most Linux kernels and was originally developed by Qualcomm. Fluoride, earlier known as Bluedroid, is included in Android OS and was originally developed by Broadcom.
There is also the Affix stack, developed by Nokia. It was once popular, but has not been updated since 2005.
FreeBSD has included Bluetooth since its v5.0 release, implemented through netgraph.
NetBSD has included Bluetooth since its v4.0 release. Its Bluetooth stack was ported to OpenBSD as well; however, OpenBSD later removed it as unmaintained.
DragonFly BSD has had NetBSD's Bluetooth implementation since 1.11 (2008). A netgraph-based implementation from FreeBSD has also been available in the tree, possibly disabled until 2014-11-15, and may require more work.
Specifications and features
The specifications were formalized by the Bluetooth Special Interest Group (SIG) and formally announced on 20 May 1998. Today it has a membership of over 30,000 companies worldwide. It was established by Ericsson, IBM, Intel, Nokia and Toshiba, and later joined by many other companies.
All versions of the Bluetooth standards are backward-compatible, so the latest standard covers all older versions.
The Bluetooth Core Specification Working Group (CSWG) produces mainly 4 kinds of specifications:
The Bluetooth Core Specification, release cycle is typically a few years in between
Core Specification Addendum (CSA), release cycle can be as tight as a few times per year
Core Specification Supplements (CSS), can be released very quickly
Errata (Available with a user account: Errata login)
Bluetooth 1.0 and 1.0B
Products were not interoperable
Anonymity was not possible, preventing certain services from using Bluetooth environments
Bluetooth 1.1
Ratified as IEEE Standard 802.15.1–2002
Many errors found in the v1.0B specifications were fixed.
Added possibility of non-encrypted channels.
Received Signal Strength Indicator (RSSI).
Bluetooth 1.2
Major enhancements include:
Faster Connection and Discovery
Adaptive frequency-hopping spread spectrum (AFH), which improves resistance to radio frequency interference by avoiding the use of crowded frequencies in the hopping sequence.
Higher transmission speeds in practice than in v1.1, up to 721 kbit/s.
Extended Synchronous Connections (eSCO), which improve voice quality of audio links by allowing retransmissions of corrupted packets, and may optionally increase audio latency to provide better concurrent data transfer.
Host Controller Interface (HCI) operation with three-wire UART.
Ratified as IEEE Standard 802.15.1–2005
Introduced Flow Control and Retransmission Modes for L2CAP.
Bluetooth 2.0 + EDR
This version of the Bluetooth Core Specification was released before 2005. The main difference is the introduction of an Enhanced Data Rate (EDR) for faster data transfer. The bit rate of EDR is 3 Mbit/s, although the maximum data transfer rate (allowing for inter-packet time and acknowledgements) is 2.1 Mbit/s. EDR uses a combination of GFSK and phase-shift keying modulation (PSK) with two variants, π/4-DQPSK and 8-DPSK. EDR can provide a lower power consumption through a reduced duty cycle.
The specification is published as Bluetooth v2.0 + EDR, which implies that EDR is an optional feature. Aside from EDR, the v2.0 specification contains other minor improvements, and products may claim compliance to "Bluetooth v2.0" without supporting the higher data rate. At least one commercial device states "Bluetooth v2.0 without EDR" on its data sheet.
Bluetooth 2.1 + EDR
Bluetooth Core Specification Version 2.1 + EDR was adopted by the Bluetooth SIG on 26 July 2007.
The headline feature of v2.1 is secure simple pairing (SSP): this improves the pairing experience for Bluetooth devices, while increasing the use and strength of security.
Version 2.1 allows various other improvements, including extended inquiry response (EIR), which provides more information during the inquiry procedure to allow better filtering of devices before connection; and sniff subrating, which reduces the power consumption in low-power mode.
Bluetooth 3.0 + HS
Version 3.0 + HS of the Bluetooth Core Specification was adopted by the Bluetooth SIG on 21 April 2009. Bluetooth v3.0 + HS provides theoretical data transfer speeds of up to 24 Mbit/s, though not over the Bluetooth link itself. Instead, the Bluetooth link is used for negotiation and establishment, and the high data rate traffic is carried over a colocated 802.11 link.
The main new feature is AMP (Alternative MAC/PHY), the addition of 802.11 as a high-speed transport. The high-speed part of the specification is not mandatory, and hence only devices that display the "+HS" logo actually support Bluetooth over 802.11 high-speed data transfer. A Bluetooth v3.0 device without the "+HS" suffix is only required to support features introduced in Core Specification Version 3.0 or earlier Core Specification Addendum 1.
L2CAP Enhanced modes Enhanced Retransmission Mode (ERTM) implements reliable L2CAP channel, while Streaming Mode (SM) implements unreliable channel with no retransmission or flow control. Introduced in Core Specification Addendum 1.
Alternative MAC/PHY Enables the use of alternative MAC and PHYs for transporting Bluetooth profile data. The Bluetooth radio is still used for device discovery, initial connection and profile configuration. However, when large quantities of data must be sent, the high-speed alternative MAC PHY 802.11 (typically associated with Wi-Fi) transports the data. This means that Bluetooth uses proven low power connection models when the system is idle, and the faster radio when it must send large quantities of data. AMP links require enhanced L2CAP modes.
Unicast Connectionless Data Permits sending service data without establishing an explicit L2CAP channel. It is intended for use by applications that require low latency between user action and reconnection/transmission of data. This is only appropriate for small amounts of data.
Enhanced Power Control Updates the power control feature to remove the open loop power control, and also to clarify ambiguities in power control introduced by the new modulation schemes added for EDR. Enhanced power control removes the ambiguities by specifying the behavior that is expected. The feature also adds closed loop power control, meaning RSSI filtering can start as the response is received. Additionally, a "go straight to maximum power" request has been introduced. This is expected to deal with the headset link loss issue typically observed when a user puts their phone into a pocket on the opposite side to the headset.
Ultra-wideband
The high-speed (AMP) feature of Bluetooth v3.0 was originally intended for UWB, but the WiMedia Alliance, the body responsible for the flavor of UWB intended for Bluetooth, announced in March 2009 that it was disbanding, and ultimately UWB was omitted from the Core v3.0 specification.
On 16 March 2009, the WiMedia Alliance announced it was entering into technology transfer agreements for the WiMedia Ultra-wideband (UWB) specifications. WiMedia has transferred all current and future specifications, including work on future high-speed and power-optimized implementations, to the Bluetooth Special Interest Group (SIG), Wireless USB Promoter Group and the USB Implementers Forum. After successful completion of the technology transfer, marketing, and related administrative items, the WiMedia Alliance ceased operations.
In October 2009, the Bluetooth Special Interest Group suspended development of UWB as part of the alternative MAC/PHY, Bluetooth v3.0 + HS solution. A small, but significant, number of former WiMedia members had not and would not sign up to the necessary agreements for the IP transfer. As of 2009, the Bluetooth SIG was in the process of evaluating other options for its longer term roadmap.
Bluetooth 4.0
The Bluetooth SIG completed the Bluetooth Core Specification version 4.0 (called Bluetooth Smart), which has since been adopted. It includes Classic Bluetooth, Bluetooth high speed and Bluetooth Low Energy (BLE) protocols. Bluetooth high speed is based on Wi-Fi, and Classic Bluetooth consists of legacy Bluetooth protocols.
Bluetooth Low Energy, previously known as Wibree, is a subset of Bluetooth v4.0 with an entirely new protocol stack for rapid build-up of simple links. As an alternative to the Bluetooth standard protocols that were introduced in Bluetooth v1.0 to v3.0, it is aimed at very low power applications powered by a coin cell. Chip designs allow for two types of implementation, dual-mode and single-mode, as well as enhanced versions of earlier chips. The provisional names Wibree and Bluetooth ULP (Ultra Low Power) were abandoned and the BLE name was used for a while. In late 2011, new logos "Bluetooth Smart Ready" for hosts and "Bluetooth Smart" for sensors were introduced as the general-public face of BLE.
Compared to Classic Bluetooth, Bluetooth Low Energy is intended to provide considerably reduced power consumption and cost while maintaining a similar communication range. In terms of lengthening the battery life of Bluetooth devices, BLE represents a significant progression.
In a single-mode implementation, only the low energy protocol stack is implemented. Dialog Semiconductor, STMicroelectronics, AMICCOM, CSR, Nordic Semiconductor and Texas Instruments have released single mode Bluetooth Low Energy solutions.
In a dual-mode implementation, Bluetooth Smart functionality is integrated into an existing Classic Bluetooth controller. The following semiconductor companies have announced the availability of chips meeting the standard: Qualcomm-Atheros, CSR, Broadcom and Texas Instruments. The compliant architecture shares all of Classic Bluetooth's existing radio and functionality, resulting in a negligible cost increase compared to Classic Bluetooth.
Cost-reduced single-mode chips, which enable highly integrated and compact devices, feature a lightweight Link Layer providing ultra-low power idle mode operation, simple device discovery, and reliable point-to-multipoint data transfer with advanced power-save and secure encrypted connections at the lowest possible cost.
General improvements in version 4.0 include the changes necessary to facilitate BLE modes, as well the Generic Attribute Profile (GATT) and Security Manager (SM) services with AES Encryption.
Core Specification Addendum 2 was unveiled in December 2011; it contains improvements to the audio Host Controller Interface and to the High Speed (802.11) Protocol Adaptation Layer.
Core Specification Addendum 3 revision 2 has an adoption date of 24 July 2012.
Core Specification Addendum 4 has an adoption date of 12 February 2013.
Bluetooth 4.1
The Bluetooth SIG announced formal adoption of the Bluetooth v4.1 specification on 4 December 2013. This specification is an incremental software update to Bluetooth Specification v4.0, and not a hardware update. The update incorporates Bluetooth Core Specification Addenda (CSA 1, 2, 3 and 4) and adds new features that improve consumer usability. These include increased co-existence support for LTE and bulk data exchange rates, and they aid developer innovation by allowing devices to support multiple roles simultaneously.
New features of this specification include:
Mobile Wireless Service Coexistence Signaling
Train Nudging and Generalized Interlaced Scanning
Low Duty Cycle Directed Advertising
L2CAP Connection Oriented and Dedicated Channels with Credit-Based Flow Control
Dual Mode and Topology
LE Link Layer Topology
802.11n PAL
Audio Architecture Updates for Wide Band Speech
Fast Data Advertising Interval
Limited Discovery Time
Notice that some features were already available in a Core Specification Addendum (CSA) before the release of v4.1.
Bluetooth 4.2
Released on 2 December 2014, it introduces features for the Internet of Things.
The major areas of improvement are:
Low Energy Secure Connection with Data Packet Length Extension
Link Layer Privacy with Extended Scanner Filter Policies
Internet Protocol Support Profile (IPSP) version 6 ready for Bluetooth Smart things to support connected home
Older Bluetooth hardware may receive 4.2 features such as Data Packet Length Extension and improved privacy via firmware updates.
Bluetooth 5
The Bluetooth SIG released Bluetooth 5 on 6 December 2016. Its new features are mainly focused on new Internet of Things technology. Sony was the first to announce Bluetooth 5.0 support with its Xperia XZ Premium in February 2017 during the Mobile World Congress 2017. The Samsung Galaxy S8 launched with Bluetooth 5 support in April 2017. In September 2017, the iPhone 8, 8 Plus and iPhone X launched with Bluetooth 5 support as well. Apple also integrated Bluetooth 5 in its new HomePod offering released on 9 February 2018. Marketing drops the point number, so that it is just "Bluetooth 5" (unlike Bluetooth 4.0); the change is for the sake of "Simplifying our marketing, communicating user benefits more effectively and making it easier to signal significant technology updates to the market."
Bluetooth 5 provides, for BLE, options that can double the speed (2 Mbit/s burst) at the expense of range, or provide up to four times the range at the expense of data rate. The increase in transmissions could be important for Internet of Things devices, where many nodes connect throughout a whole house. Bluetooth 5 increases the capacity of connectionless services such as location-relevant navigation over low-energy Bluetooth connections.
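As a rough illustration of that speed-versus-range trade-off, the sketch below lists nominal on-air bit rates for the LE PHY options commonly associated with Bluetooth 5; these are raw rates rather than application throughput, and the table and names are written for this example, not quoted from the specification.

# Sketch of nominal LE on-air bit rates under Bluetooth 5 PHY options.
# Illustrative figures only, not application-level throughput.

LE_PHY_BITRATE_KBPS = {
    "LE 1M": 1000,        # the original Bluetooth Low Energy rate
    "LE 2M": 2000,        # doubled speed, shorter range
    "LE Coded S=2": 500,  # long range, 2x coding overhead
    "LE Coded S=8": 125,  # longest range, 8x coding overhead
}

def relative_to_1m(phy: str) -> float:
    """Bit rate of a PHY option relative to LE 1M."""
    return LE_PHY_BITRATE_KBPS[phy] / LE_PHY_BITRATE_KBPS["LE 1M"]

if __name__ == "__main__":
    for phy in LE_PHY_BITRATE_KBPS:
        print(f"{phy}: {relative_to_1m(phy):g}x the LE 1M rate")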
The major areas of improvement are:
Slot Availability Mask (SAM)
2 Mbit/s PHY for LE
LE Long Range
High Duty Cycle Non-Connectable Advertising
LE Advertising Extensions
LE Channel Selection Algorithm #2
Features Added in CSA5 – Integrated in v5.0:
Higher Output Power
The following features were removed in this version of the specification:
Park State
Bluetooth 5.1
The Bluetooth SIG presented Bluetooth 5.1 on 21 January 2019.
The major areas of improvement are:
Angle of Arrival (AoA) and Angle of Departure (AoD) which are used for locating and tracking of devices
Advertising Channel Index
GATT Caching
Minor Enhancements batch 1:
HCI support for debug keys in LE Secure Connections
Sleep clock accuracy update mechanism
ADI field in scan response data
Interaction between QoS and Flow Specification
Block Host channel classification for secondary advertising
Allow the SID to appear in scan response reports
Specify the behavior when rules are violated
Periodic Advertising Sync Transfer
Features Added in Core Specification Addendum (CSA) 6 – Integrated in v5.1:
Models
Mesh-based model hierarchy
The following features were removed in this version of the specification:
Unit keys
Bluetooth 5.2
On 31 December 2019, the Bluetooth SIG published the Bluetooth Core Specification Version 5.2. The new specification adds new features:
Enhanced Attribute Protocol (EATT), an improved version of the Attribute Protocol (ATT)
LE Power Control
LE Isochronous Channels
LE Audio, which is built on top of the new 5.2 features. BT LE Audio was announced in January 2020 at CES by the Bluetooth SIG. Compared to regular Bluetooth Audio, Bluetooth Low Energy Audio makes lower battery consumption possible and creates a standardized way of transmitting audio over BT LE. Bluetooth LE Audio also allows one-to-many and many-to-one broadcasts, allowing multiple receivers from one source or one receiver for multiple sources. It uses the new LC3 codec. BLE Audio will also add support for hearing aids.
Bluetooth 5.3
The Bluetooth SIG published the Bluetooth Core Specification Version 5.3 on July 13, 2021. The feature enhancements of Bluetooth 5.3 are:
Connection Subrating
Periodic Advertisement Interval
Channel Classification Enhancement
Encryption Key Size Control Enhancements
The following features were removed in this version of the specification:
Alternate MAC and PHY (AMP) Extension
Technical information
Architecture
Software
Seeking to extend the compatibility of Bluetooth devices, the devices that adhere to the standard use an interface called HCI (Host Controller Interface) between the host device (e.g. laptop, phone) and the Bluetooth device (e.g. Bluetooth wireless headset).
High-level protocols such as the SDP (Protocol used to find other Bluetooth devices within the communication range, also responsible for detecting the function of devices in range), RFCOMM (Protocol used to emulate serial port connections) and TCS (Telephony control protocol) interact with the baseband controller through the L2CAP (Logical Link Control and Adaptation Protocol). The L2CAP protocol is responsible for the segmentation and reassembly of the packets.
Hardware
The hardware that makes up the Bluetooth device logically consists of two parts, which may or may not be physically separate: a radio device, responsible for modulating and transmitting the signal, and a digital controller. The digital controller is likely a CPU, one of whose functions is to run a Link Controller and to interface with the host device, though some functions may be delegated to hardware. The Link Controller is responsible for the processing of the baseband and the management of ARQ and physical layer FEC protocols. In addition, it handles the transfer functions (both asynchronous and synchronous), audio coding (e.g. SBC (codec)) and data encryption. The CPU of the device is responsible for attending to the Bluetooth-related instructions from the host device, in order to simplify its operation. To do this, the CPU runs software called the Link Manager, which has the function of communicating with other devices through the LMP protocol.
A Bluetooth device is a short-range wireless device. Bluetooth devices are fabricated on RF CMOS integrated circuit (RF circuit) chips.
Bluetooth protocol stack
Bluetooth is defined as a layer protocol architecture consisting of core protocols, cable replacement protocols, telephony control protocols, and adopted protocols. Mandatory protocols for all Bluetooth stacks are LMP, L2CAP and SDP. In addition, devices that communicate with Bluetooth almost universally can use these protocols: HCI and RFCOMM.
Link Manager
The Link Manager (LM) is the system that manages establishing the connection between devices. It is responsible for the establishment, authentication and configuration of the link. The Link Manager locates other managers and communicates with them via the management protocol of the LMP link. To perform its function as a service provider, the LM uses the services included in the Link Controller (LC).
The Link Manager Protocol basically consists of several PDUs (Protocol Data Units) that are sent from one device to another. The following is a list of supported services:
Transmission and reception of data.
Name request
Request of the link addresses.
Establishment of the connection.
Authentication.
Negotiation of link mode and connection establishment.
Host Controller Interface
The Host Controller Interface provides a command interface for the controller and for the link manager, which allows access to the hardware status and control registers.
This interface provides an access layer for all Bluetooth devices. The HCI layer of the machine exchanges commands and data with the HCI firmware present in the Bluetooth device. One of the most important HCI tasks that must be performed is the automatic discovery of other Bluetooth devices that are within the coverage radius.
Logical Link Control and Adaptation Protocol
The Logical Link Control and Adaptation Protocol (L2CAP) is used to multiplex multiple logical connections between two devices using different higher level protocols.
Provides segmentation and reassembly of on-air packets.
In Basic mode, L2CAP provides packets with a payload configurable up to 64 kB, with 672 bytes as the default MTU, and 48 bytes as the minimum mandatory supported MTU.
In Retransmission and Flow Control modes, L2CAP can be configured either for isochronous data or reliable data per channel by performing retransmissions and CRC checks.
Bluetooth Core Specification Addendum 1 adds two additional L2CAP modes to the core specification. These modes effectively deprecate original Retransmission and Flow Control modes:
Enhanced Retransmission Mode (ERTM) This mode is an improved version of the original retransmission mode. This mode provides a reliable L2CAP channel.
Streaming Mode (SM) This is a very simple mode, with no retransmission or flow control. This mode provides an unreliable L2CAP channel.
Reliability in any of these modes is optionally and/or additionally guaranteed by the lower layer Bluetooth BR/EDR air interface by configuring the number of retransmissions and flush timeout (time after which the radio flushes packets). In-order sequencing is guaranteed by the lower layer.
Only L2CAP channels configured in ERTM or SM may be operated over AMP logical links.
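As a toy illustration of the segmentation and reassembly role described above (not the actual L2CAP framing, which also carries channel identifiers and, in some modes, sequence numbers and checksums), the following sketch splits an upper-layer SDU into MTU-sized fragments and puts it back together; the 672-byte default is the figure quoted above, and the function names are invented for this example.

# Toy sketch of L2CAP-style segmentation and reassembly.
# Real L2CAP PDUs carry headers, channel IDs and (in ERTM) control fields;
# this only shows the payload-splitting idea.

DEFAULT_MTU = 672  # default MTU mentioned above; 48 bytes is the minimum mandatory

def segment(sdu: bytes, mtu: int = DEFAULT_MTU) -> list[bytes]:
    """Split an upper-layer SDU into fragments no larger than the MTU."""
    if mtu < 48:
        raise ValueError("MTU below the minimum mandatory 48 bytes")
    return [sdu[i:i + mtu] for i in range(0, len(sdu), mtu)] or [b""]

def reassemble(fragments: list[bytes]) -> bytes:
    """Recombine fragments, relying on in-order delivery from the lower layer."""
    return b"".join(fragments)

if __name__ == "__main__":
    data = bytes(range(256)) * 10          # 2560-byte example SDU
    parts = segment(data)
    assert reassemble(parts) == data
    print(len(parts), "fragments of at most", DEFAULT_MTU, "bytes")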
Service Discovery Protocol
The Service Discovery Protocol (SDP) allows a device to discover services offered by other devices, and their associated parameters. For example, when you use a mobile phone with a Bluetooth headset, the phone uses SDP to determine which Bluetooth profiles the headset can use (Headset Profile, Hands Free Profile (HFP), Advanced Audio Distribution Profile (A2DP) etc.) and the protocol multiplexer settings needed for the phone to connect to the headset using each of them. Each service is identified by a Universally Unique Identifier (UUID), with official services (Bluetooth profiles) assigned a short form UUID (16 bits rather than the full 128).
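To illustrate the short-form UUIDs mentioned above: 16-bit (and 32-bit) service UUIDs are shorthand for full 128-bit UUIDs built on the Bluetooth Base UUID. The sketch below expands a 16-bit value in Python; the example value 0x110B, commonly associated with the audio sink service class, is used here purely for illustration.

import uuid

# A 16-bit short UUID xxxx expands to 0000xxxx-0000-1000-8000-00805F9B34FB,
# i.e. it is substituted into the Bluetooth Base UUID.
BASE_UUID_SUFFIX = "-0000-1000-8000-00805f9b34fb"

def expand_short_uuid(short: int) -> uuid.UUID:
    """Expand a 16-bit (or 32-bit) SDP short UUID to its 128-bit form."""
    return uuid.UUID(f"{short:08x}{BASE_UUID_SUFFIX}")

if __name__ == "__main__":
    # 0x110B is used as an illustrative service-class value.
    print(expand_short_uuid(0x110B))  # 0000110b-0000-1000-8000-00805f9b34fb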
Radio Frequency Communications
Radio Frequency Communications (RFCOMM) is a cable replacement protocol used for generating a virtual serial data stream. RFCOMM provides for binary data transport and emulates EIA-232 (formerly RS-232) control signals over the Bluetooth baseband layer, i.e., it is a serial port emulation.
RFCOMM provides a simple, reliable, data stream to the user, similar to TCP. It is used directly by many telephony related profiles as a carrier for AT commands, as well as being a transport layer for OBEX over Bluetooth.
Many Bluetooth applications use RFCOMM because of its widespread support and publicly available API on most operating systems. Additionally, applications that used a serial port to communicate can be quickly ported to use RFCOMM.
Bluetooth Network Encapsulation Protocol
The Bluetooth Network Encapsulation Protocol (BNEP) is used for transferring another protocol stack's data via an L2CAP channel.
Its main purpose is the transmission of IP packets in the Personal Area Networking Profile.
BNEP performs a similar function to SNAP in Wireless LAN.
Audio/Video Control Transport Protocol
The Audio/Video Control Transport Protocol (AVCTP) is used by the remote control profile to transfer AV/C commands over an L2CAP channel. The music control buttons on a stereo headset use this protocol to control the music player.
Audio/Video Distribution Transport Protocol
The Audio/Video Distribution Transport Protocol (AVDTP) is used by the Advanced Audio Distribution Profile (A2DP) to stream music to stereo headsets over an L2CAP channel. It is also intended for use by the Video Distribution Profile in Bluetooth transmission.
Telephony Control Protocol
The Telephony Control Protocol – Binary (TCS BIN) is the bit-oriented protocol that defines the call control signaling for the establishment of voice and data calls between Bluetooth devices. Additionally, "TCS BIN defines mobility management procedures for handling groups of Bluetooth TCS devices."
TCS-BIN is only used by the cordless telephony profile, which failed to attract implementers. As such it is only of historical interest.
Adopted protocols
Adopted protocols are defined by other standards-making organizations and incorporated into Bluetooth's protocol stack, allowing Bluetooth to code protocols only when necessary. The adopted protocols include:
Point-to-Point Protocol (PPP) Internet standard protocol for transporting IP datagrams over a point-to-point link.
TCP/IP/UDP Foundation Protocols for TCP/IP protocol suite
Object Exchange Protocol (OBEX) Session-layer protocol for the exchange of objects, providing a model for object and operation representation
Wireless Application Environment/Wireless Application Protocol (WAE/WAP) WAE specifies an application framework for wireless devices and WAP is an open standard to provide mobile users access to telephony and information services.
Baseband error correction
Depending on packet type, individual packets may be protected by error correction, either 1/3 rate forward error correction (FEC) or 2/3 rate. In addition, packets with CRC will be retransmitted until acknowledged by automatic repeat request (ARQ).
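The 1/3 rate code simply transmits each bit three times and decodes by majority vote; the 2/3 rate code is a shortened Hamming code and is not shown here. A minimal C sketch of the majority-vote decoding:

#include <stdint.h>
#include <stdio.h>

/* 1/3-rate FEC: each payload bit is sent three times; the receiver takes a majority vote. */
static uint8_t majority3(uint8_t a, uint8_t b, uint8_t c) {
    return (uint8_t)((a & b) | (a & c) | (b & c));   /* 1 if at least two of the three bits are 1 */
}

int main(void) {
    /* A transmitted bit 1, repeated three times, with one repetition corrupted in transit. */
    uint8_t received[3] = { 1, 0, 1 };
    uint8_t decoded = majority3(received[0], received[1], received[2]);
    printf("decoded bit: %u\n", decoded);            /* prints 1: the single bit error is corrected */
    return 0;
}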
Setting up connections
Any Bluetooth device in discoverable mode transmits the following information on demand:
Device name
Device class
List of services
Technical information (for example: device features, manufacturer, Bluetooth specification used, clock offset)
Any device may perform an inquiry to find other devices to connect to, and any device can be configured to respond to such inquiries. However, a device always responds to direct connection requests and transmits the information shown in the list above if the requesting device already knows its address. Use of a device's services may require pairing or acceptance by its owner, but the connection itself can be initiated by any device and held until it goes out of range. Some devices can be connected to only one device at a time, and connecting to them prevents them from connecting to other devices and appearing in inquiries until they disconnect from the other device.
Every device has a unique 48-bit address. However, these addresses are generally not shown in inquiries. Instead, friendly Bluetooth names are used, which can be set by the user. This name appears when another user scans for devices and in lists of paired devices.
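For illustration, a 48-bit Bluetooth device address is conventionally displayed as six colon-separated hexadecimal octets; a minimal C sketch of that formatting (the example value is arbitrary, not a real device address):

#include <stdint.h>
#include <stdio.h>

/* Format a 48-bit Bluetooth device address in the conventional XX:XX:XX:XX:XX:XX form. */
static void print_bdaddr(uint64_t addr) {
    for (int shift = 40; shift >= 0; shift -= 8)
        printf("%02X%s", (unsigned)((addr >> shift) & 0xff), shift ? ":" : "\n");
}

int main(void) {
    print_bdaddr(0x001A7DDA7113ULL);   /* arbitrary example value */
    return 0;
}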
Most cellular phones have the Bluetooth name set to the manufacturer and model of the phone by default. Most cellular phones and laptops show only the Bluetooth names and special programs are required to get additional information about remote devices. This can be confusing as, for example, there could be several cellular phones in range named T610 (see Bluejacking).
Pairing and bonding
Motivation
Many services offered over Bluetooth can expose private data or let a connecting party control the Bluetooth device. Security reasons make it necessary to recognize specific devices, and thus enable control over which devices can connect to a given Bluetooth device. At the same time, it is useful for Bluetooth devices to be able to establish a connection without user intervention (for example, as soon as in range).
To resolve this conflict, Bluetooth uses a process called bonding, and a bond is generated through a process called pairing. The pairing process is triggered either by a specific request from a user to generate a bond (for example, the user explicitly requests to "Add a Bluetooth device"), or it is triggered automatically when connecting to a service where (for the first time) the identity of a device is required for security purposes. These two cases are referred to as dedicated bonding and general bonding respectively.
Pairing often involves some level of user interaction. This user interaction confirms the identity of the devices. When pairing completes, a bond forms between the two devices, enabling those two devices to connect in the future without repeating the pairing process to confirm device identities. When desired, the user can remove the bonding relationship.
Implementation
During pairing, the two devices establish a relationship by creating a shared secret known as a link key. If both devices store the same link key, they are said to be paired or bonded. A device that wants to communicate only with a bonded device can cryptographically authenticate the identity of the other device, ensuring it is the same device it previously paired with. Once a link key is generated, an authenticated Asynchronous Connection-Less (ACL) link between the devices may be encrypted to protect exchanged data against eavesdropping. Users can delete link keys from either device, which removes the bond between the devices—so it is possible for one device to have a stored link key for a device it is no longer paired with.
Bluetooth services generally require either encryption or authentication and as such require pairing before they let a remote device connect. Some services, such as the Object Push Profile, elect not to explicitly require authentication or encryption so that pairing does not interfere with the user experience associated with the service use-cases.
Pairing mechanisms
Pairing mechanisms changed significantly with the introduction of Secure Simple Pairing in Bluetooth v2.1. The following summarizes the pairing mechanisms:
Legacy pairing: This is the only method available in Bluetooth v2.0 and before. Each device must enter a PIN code; pairing is only successful if both devices enter the same PIN code. Any 16-byte UTF-8 string may be used as a PIN code; however, not all devices may be capable of entering all possible PIN codes.
Limited input devices: The obvious example of this class of device is a Bluetooth hands-free headset, which generally has few inputs. These devices usually have a fixed PIN, for example "0000" or "1234", that is hard-coded into the device.
Numeric input devices: Mobile phones are classic examples of these devices. They allow a user to enter a numeric value up to 16 digits in length.
Alpha-numeric input devices: PCs and smartphones are examples of these devices. They allow a user to enter full UTF-8 text as a PIN code. If pairing with a less capable device the user must be aware of the input limitations on the other device; there is no mechanism available for a capable device to determine how it should limit the available input a user may use.
Secure Simple Pairing (SSP): This is required by Bluetooth v2.1, although a Bluetooth v2.1 device may only use legacy pairing to interoperate with a v2.0 or earlier device. Secure Simple Pairing uses a form of public-key cryptography, and some types can help protect against man in the middle, or MITM attacks. SSP has the following authentication mechanisms:
Just works: As the name implies, this method just works, with no user interaction. However, a device may prompt the user to confirm the pairing process. This method is typically used by headsets with minimal IO capabilities, and is more secure than the fixed PIN mechanism this limited set of devices uses for legacy pairing. This method provides no man-in-the-middle (MITM) protection.
Numeric comparison: If both devices have a display, and at least one can accept a binary yes/no user input, they may use Numeric Comparison. This method displays a 6-digit numeric code on each device. The user should compare the numbers to ensure they are identical. If the comparison succeeds, the user(s) should confirm pairing on the device(s) that can accept an input. This method provides MITM protection, assuming the user confirms on both devices and actually performs the comparison properly.
Passkey Entry: This method may be used between a device with a display and a device with numeric keypad entry (such as a keyboard), or two devices with numeric keypad entry. In the first case, the display presents a 6-digit numeric code to the user, who then enters the code on the keypad. In the second case, the user of each device enters the same 6-digit number. Both of these cases provide MITM protection.
Out of band (OOB): This method uses an external means of communication, such as near-field communication (NFC) to exchange some information used in the pairing process. Pairing is completed using the Bluetooth radio, but requires information from the OOB mechanism. This provides only the level of MITM protection that is present in the OOB mechanism.
SSP is considered simple for the following reasons:
In most cases, it does not require a user to generate a passkey.
For use cases not requiring MITM protection, user interaction can be eliminated.
For numeric comparison, MITM protection can be achieved with a simple equality comparison by the user.
Using OOB with NFC enables pairing when devices simply get close, rather than requiring a lengthy discovery process.
Security concerns
Prior to Bluetooth v2.1, encryption is not required and can be turned off at any time. Moreover, the encryption key is only good for approximately 23.5 hours; using a single encryption key longer than this time allows simple XOR attacks to retrieve the encryption key.
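The XOR weakness follows from reusing a stream-cipher keystream: if two packets are encrypted with the same keystream, XORing the two ciphertexts cancels the keystream and reveals the XOR of the plaintexts. A minimal C illustration (the byte values are arbitrary):

#include <stdint.h>
#include <stdio.h>

int main(void) {
    /* Two plaintext bytes encrypted with the same keystream byte. */
    uint8_t p1 = 0x41, p2 = 0x42, keystream = 0x5c;
    uint8_t c1 = p1 ^ keystream;
    uint8_t c2 = p2 ^ keystream;
    /* An eavesdropper who captures c1 and c2 learns p1 ^ p2 without knowing the keystream. */
    printf("c1 ^ c2 = %02x, p1 ^ p2 = %02x\n", c1 ^ c2, p1 ^ p2);   /* both print 03 */
    return 0;
}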
Turning off encryption is required for several normal operations, so it is problematic to detect if encryption is disabled for a valid reason or a security attack.
Bluetooth v2.1 addresses this in the following ways:
Encryption is required for all non-SDP (Service Discovery Protocol) connections
A new Encryption Pause and Resume feature is used for all normal operations that require that encryption be disabled. This enables easy identification of normal operation from security attacks.
The encryption key must be refreshed before it expires.
Link keys may be stored on the device file system, not on the Bluetooth chip itself. Many Bluetooth chip manufacturers let link keys be stored on the device—however, if the device is removable, this means that the link key moves with the device.
Security
Overview
Bluetooth implements confidentiality, authentication and key derivation with custom algorithms based on the SAFER+ block cipher. Bluetooth key generation is generally based on a Bluetooth PIN, which must be entered into both devices. This procedure might be modified if one of the devices has a fixed PIN (e.g., for headsets or similar devices with a restricted user interface). During pairing, an initialization key or master key is generated, using the E22 algorithm.
The E0 stream cipher is used for encrypting packets, granting confidentiality, and is based on a shared cryptographic secret, namely a previously generated link key or master key. Those keys, used for subsequent encryption of data sent via the air interface, rely on the Bluetooth PIN, which has been entered into one or both devices.
An overview of Bluetooth vulnerabilities and exploits was published in 2007 by Andreas Becker.
In September 2008, the National Institute of Standards and Technology (NIST) published a Guide to Bluetooth Security as a reference for organizations. It describes Bluetooth security capabilities and how to secure Bluetooth technologies effectively. While Bluetooth has its benefits, it is susceptible to denial-of-service attacks, eavesdropping, man-in-the-middle attacks, message modification, and resource misappropriation. Users and organizations must evaluate their acceptable level of risk and incorporate security into the lifecycle of Bluetooth devices. To help mitigate risks, included in the NIST document are security checklists with guidelines and recommendations for creating and maintaining secure Bluetooth piconets, headsets, and smart card readers.
Bluetooth v2.1 – finalized in 2007 with consumer devices first appearing in 2009 – makes significant changes to Bluetooth's security, including pairing. See the pairing mechanisms section for more about these changes.
Bluejacking
Bluejacking is the sending of either a picture or a message from one user to an unsuspecting user through Bluetooth wireless technology. Common applications include short messages, e.g., "You've just been bluejacked!" Bluejacking does not involve the removal or alteration of any data from the device. Bluejacking can also involve taking control of a mobile device wirelessly and phoning a premium rate line, owned by the bluejacker. Security advances have alleviated this issue.
History of security concerns
2001–2004
In 2001, Jakobsson and Wetzel from Bell Laboratories discovered flaws in the Bluetooth pairing protocol and also pointed to vulnerabilities in the encryption scheme. In 2003, Ben and Adam Laurie from A.L. Digital Ltd. discovered that serious flaws in some poor implementations of Bluetooth security may lead to disclosure of personal data. In a subsequent experiment, Martin Herfurt from the trifinite.group was able to do a field-trial at the CeBIT fairgrounds, showing the importance of the problem to the world. A new attack called BlueBug was used for this experiment. In 2004 the first purported virus using Bluetooth to spread itself among mobile phones appeared on the Symbian OS.
The virus was first described by Kaspersky Lab and requires users to confirm the installation of unknown software before it can propagate. The virus was written as a proof-of-concept by a group of virus writers known as "29A" and sent to anti-virus groups. Thus, it should be regarded as a potential (but not real) security threat to Bluetooth technology or Symbian OS since the virus has never spread outside of this system. In August 2004, a world-record-setting experiment (see also Bluetooth sniping) showed that the range of Class 2 Bluetooth radios could be extended far beyond their nominal range with directional antennas and signal amplifiers.
This poses a potential security threat because it enables attackers to access vulnerable Bluetooth devices from a distance beyond expectation. The attacker must also be able to receive information from the victim to set up a connection. No attack can be made against a Bluetooth device unless the attacker knows its Bluetooth address and which channels to transmit on, although these can be deduced within a few minutes if the device is in use.
2005
In January 2005, a mobile malware worm known as Lasco surfaced. The worm began targeting mobile phones using Symbian OS (Series 60 platform) using Bluetooth enabled devices to replicate itself and spread to other devices. The worm is self-installing and begins once the mobile user approves the transfer of the file (Velasco.sis) from another device. Once installed, the worm begins looking for other Bluetooth enabled devices to infect. Additionally, the worm infects other .SIS files on the device, allowing replication to another device through the use of removable media (Secure Digital, CompactFlash, etc.). The worm can render the mobile device unstable.
In April 2005, Cambridge University security researchers published results of their actual implementation of passive attacks against the PIN-based pairing between commercial Bluetooth devices. They confirmed that attacks are practicably fast, and the Bluetooth symmetric key establishment method is vulnerable. To rectify this vulnerability, they designed an implementation that showed that stronger, asymmetric key establishment is feasible for certain classes of devices, such as mobile phones.
In June 2005, Yaniv Shaked and Avishai Wool published a paper describing both passive and active methods for obtaining the PIN for a Bluetooth link. The passive attack allows a suitably equipped attacker to eavesdrop on communications and spoof if the attacker was present at the time of initial pairing. The active method makes use of a specially constructed message that must be inserted at a specific point in the protocol, to make the master and slave repeat the pairing process. After that, the first method can be used to crack the PIN. This attack's major weakness is that it requires the user of the devices under attack to re-enter the PIN during the attack when the device prompts them to. Also, this active attack probably requires custom hardware, since most commercially available Bluetooth devices are not capable of the timing necessary.
In August 2005, police in Cambridgeshire, England, issued warnings about thieves using Bluetooth enabled phones to track other devices left in cars. Police are advising users to ensure that any mobile networking connections are de-activated if laptops and other devices are left in this way.
2006
In April 2006, researchers from Secure Network and F-Secure published a report that warns of the large number of devices left in a visible state, and issued statistics on the spread of various Bluetooth services and the ease of spread of an eventual Bluetooth worm.
In October 2006, at the Luxemburgish Hack.lu Security Conference, Kevin Finistere and Thierry Zoller demonstrated and released a remote root shell via Bluetooth on Mac OS X v10.3.9 and v10.4. They also demonstrated the first Bluetooth PIN and Linkkeys cracker, which is based on the research of Wool and Shaked.
2017
In April 2017, security researchers at Armis discovered multiple exploits in the Bluetooth software in various platforms, including Microsoft Windows, Linux, Apple iOS, and Google Android. These vulnerabilities are collectively called "BlueBorne". The exploits allow an attacker to connect to devices or systems without authentication and can give them "virtually full control over the device". Armis contacted Google, Microsoft, Apple, Samsung and Linux developers allowing them to patch their software before the coordinated announcement of the vulnerabilities on 12 September 2017.
2018
In July 2018, Lior Neumann and Eli Biham, researchers at the Technion – Israel Institute of Technology identified a security vulnerability in the latest Bluetooth pairing procedures: Secure Simple Pairing and LE Secure Connections.
Also, in October 2018, Karim Lounis, a network security researcher at Queen's University, identified a security vulnerability, called CDV (Connection Dumping Vulnerability), on various Bluetooth devices that allows an attacker to tear down an existing Bluetooth connection and cause the deauthentication and disconnection of the involved devices. The researcher demonstrated the attack on various devices of different categories and from different manufacturers.
2019
In August 2019, security researchers at the Singapore University of Technology and Design, Helmholtz Center for Information Security, and University of Oxford discovered a vulnerability, called KNOB (Key Negotiation Of Bluetooth) in the key negotiation that would "brute force the negotiated encryption keys, decrypt the eavesdropped ciphertext, and inject valid encrypted messages (in real-time)".
Health concerns
Bluetooth uses the radio frequency spectrum in the 2.402 GHz to 2.480 GHz range, which is non-ionizing radiation, of similar bandwidth to that used by wireless and mobile phones. No specific harm has been demonstrated, even though wireless transmission has been included by IARC in the possible carcinogen list. Maximum power output from a Bluetooth radio is 100 mW for class 1, 2.5 mW for class 2, and 1 mW for class 3 devices. Even the maximum power output of class 1 is a lower level than the lowest-powered mobile phones. UMTS and W-CDMA output 250 mW, GSM 1800/1900 outputs 1000 mW, and GSM 850/900 outputs 2000 mW.
Award programs
The Bluetooth Innovation World Cup, a marketing initiative of the Bluetooth Special Interest Group (SIG), was an international competition that encouraged the development of innovations for applications leveraging Bluetooth technology in sports, fitness and health care products. The competition aimed to stimulate new markets.
The Bluetooth Innovation World Cup morphed into the Bluetooth Breakthrough Awards in 2013. Bluetooth SIG subsequently launched the Imagine Blue Award in 2016 at Bluetooth World. The Bluetooth Breakthrough Awards program highlights the most innovative products and applications available today, prototypes coming soon, and student-led projects in the making.
See also
ANT+
Bluetooth stack – building blocks that make up the various implementations of the Bluetooth protocol
Bluetooth profile – features used within the Bluetooth stack
Bluesniping
BlueSoleil – proprietary Bluetooth driver
Bluetooth Low Energy Beacons (AltBeacon, iBeacon, Eddystone)
Bluetooth Mesh
Continua Health Alliance
DASH7
Headset (audio)
Hotspot (Wi-Fi)
Java APIs for Bluetooth
Key finder
Li-Fi
List of Bluetooth protocols
List of Bluetooth Profiles
MyriaNed
Near-field communication
RuBee – secure wireless protocol alternative
Tethering
Thread (network protocol)
Wi-Fi HaLow
ZigBee – low-power lightweight wireless protocol in the ISM band based on IEEE 802.15.4
Notes
References
External links
Specifications at Bluetooth SIG
Bluetooth
Mobile computers
Networking standards
Wireless communication systems
Telecommunications-related introductions in 1989
Swedish inventions |
3926 | https://en.wikipedia.org/wiki/Blowfish%20%28disambiguation%29 | Blowfish (disambiguation) | Blowfish are species of fish in the family Tetraodontidae.
Blowfish may also refer to:
Porcupinefish, belonging to the family Diodontidae
Blowfish (cipher), an encryption algorithm
Blowfish (company), an American erotic goods supplier
The Blowfish, a satirical newspaper at Brandeis University
Lexington County Blowfish, a baseball team
See also
Hootie & the Blowfish, an American rock band |
3940 | https://en.wikipedia.org/wiki/Blowfish%20%28cipher%29 | Blowfish (cipher) | Blowfish is a symmetric-key block cipher, designed in 1993 by Bruce Schneier and included in many cipher suites and encryption products. Blowfish provides a good encryption rate in software, and no effective cryptanalysis of it has been found to date. However, the Advanced Encryption Standard (AES) now receives more attention, and Schneier recommends Twofish for modern applications.
Schneier designed Blowfish as a general-purpose algorithm, intended as an alternative to the aging DES and free of the problems and constraints associated with other algorithms. At the time Blowfish was released, many other designs were proprietary, encumbered by patents or were commercial or government secrets. Schneier has stated that "Blowfish is unpatented, and will remain so in all countries. The algorithm is hereby placed in the public domain, and can be freely used by anyone."
Notable features of the design include key-dependent S-boxes and a highly complex key schedule.
The algorithm
Blowfish has a 64-bit block size and a variable key length from 32 bits up to 448 bits. It is a 16-round Feistel cipher and uses large key-dependent S-boxes. In structure it resembles CAST-128, which uses fixed S-boxes.
The adjacent diagram shows Blowfish's encryption routine. Each line represents 32 bits. There are five subkey-arrays: one 18-entry P-array (denoted as K in the diagram, to avoid confusion with the Plaintext) and four 256-entry S-boxes (S0, S1, S2 and S3).
Every round r consists of 4 actions: first, XOR the left half (L) of the data with the r-th P-array entry; second, use the XORed data as input to Blowfish's F-function; third, XOR the F-function's output with the right half (R) of the data; fourth, swap L and R.
The F-function splits the 32-bit input into four 8-bit quarters and uses the quarters as input to the S-boxes. The S-boxes accept 8-bit input and produce 32-bit output. The outputs are added modulo 2^32 and XORed to produce the final 32-bit output.
After the 16th round, undo the last swap, and XOR L with K18 and R with K17 (output whitening).
Decryption is exactly the same as encryption, except that P1, P2, ..., P18 are used in the reverse order. This is not so obvious because xor is commutative and associative. A common misconception is to use inverse order of encryption as decryption algorithm (i.e. first XORing P17 and P18 to the ciphertext block, then using the P-entries in reverse order).
Blowfish's key schedule starts by initializing the P-array and S-boxes with values derived from the hexadecimal digits of pi, which contain no obvious pattern (see nothing up my sleeve number). The secret key is then XORed, byte by byte, with all the P-entries in order, cycling the key if necessary. A 64-bit all-zero block is then encrypted with the algorithm as it stands. The resultant ciphertext replaces P1 and P2. The same ciphertext is then encrypted again with the new subkeys, and the new ciphertext replaces P3 and P4. This continues until the entire P-array and all the S-box entries have been replaced. In all, the Blowfish encryption algorithm runs 521 times to generate all the subkeys; about 4 KB of data is processed.
Because the P-array is 576 bits long, and the key bytes are XORed through all these 576 bits during the initialization, many implementations support key sizes up to 576 bits. The reason for that is a discrepancy between the original Blowfish description, which uses 448-bit keys, and its reference implementation, which uses 576-bit keys. The test vectors for verifying third-party implementations were also produced with 576-bit keys. When asked which Blowfish version is the correct one, Bruce Schneier answered: "The test vectors should be used to determine the one true Blowfish".
Another opinion is that the 448-bit limit is present to ensure that every bit of every subkey depends on every bit of the key, as the last four values of the P-array don't affect every bit of the ciphertext. This point should be taken into consideration for implementations with a different number of rounds, as even though a longer key increases security against an exhaustive attack, it weakens the security guaranteed by the algorithm. Moreover, the slow initialization of the cipher with each change of key grants a natural protection against brute-force attacks, which does not really justify key sizes longer than 448 bits.
Blowfish in pseudocode
uint32_t P[18];
uint32_t S[4][256];
uint32_t f (uint32_t x) {
uint32_t h = S[0][x >> 24] + S[1][x >> 16 & 0xff];
return ( h ^ S[2][x >> 8 & 0xff] ) + S[3][x & 0xff];
}
// swap() is assumed to be a helper (or macro) that exchanges the two 32-bit values pointed to by L and R
void blowfish_encrypt(uint32_t *L, uint32_t *R) {
for (short r = 0; r < 16; r++) {
*L = *L ^ P[r];
*R = f(*L) ^ *R;
swap(L, R);
}
swap(L, R);
*R = *R ^ P[16];
*L = *L ^ P[17];
}
void blowfish_decrypt(uint32_t *L, uint32_t *R) {
for (short r = 17; r > 1; r--) {
*L = *L ^ P[r];
*R = f(*L) ^ *R;
swap(L, R);
}
swap(L, R);
*R = *R ^ P[1];
*L = *L ^ P[0];
}
// ...
// initializing the P-array and S-boxes with values derived from pi; omitted in the example (you can find them below)
// ...
{
/* initialize P-array with the key; `key` (byte array) and `key_len` are assumed to be supplied by the caller */
uint32_t k;
for (short i = 0, p = 0; i < 18; i++) {
k = 0x00;
for (short j = 0; j < 4; j++) {
k = (k << 8) | (uint8_t) key[p];
p = (p + 1) % key_len;
}
P[i] ^= k;
}
/* blowfish key expansion (521 iterations) */
uint32_t l = 0x00, r = 0x00;
for (short i = 0; i < 18; i+=2) {
blowfish_encrypt(&l, &r);
P[i] = l;
P[i+1] = r;
}
for (short i = 0; i < 4; i++) {
for (short j = 0; j < 256; j+=2) {
blowfish_encrypt(&l, &r);
S[i][j] = l;
S[i][j+1] = r;
}
}
}
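A usage sketch for the pseudocode above, assuming the key-expansion block has been wrapped in a function blowfish_init(key, key_len) (a name introduced here only for illustration) and that P and S have first been loaded with the published pi-derived constants; the output can then be checked against Schneier's published test vectors.

#include <stdint.h>
#include <stdio.h>

/* Usage sketch: relies on the definitions above plus a hypothetical blowfish_init() wrapper. */
int main(void) {
    uint8_t key[8] = { 0 };                 /* an all-zero 64-bit key */
    blowfish_init(key, sizeof key);         /* hypothetical wrapper around the key-expansion block above */
    uint32_t L = 0x00000000, R = 0x00000000;
    blowfish_encrypt(&L, &R);
    printf("%08x %08x\n", L, R);            /* compare against the published test vectors */
    return 0;
}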
Blowfish in practice
Blowfish is a fast block cipher, except when changing keys. Each new key requires the pre-processing equivalent of encrypting about 4 kilobytes of text, which is very slow compared to other block ciphers. This prevents its use in certain applications, but is not a problem in others.
In one application Blowfish's slow key changing is actually a benefit: the password-hashing method (crypt $2, i.e. bcrypt) used in OpenBSD uses an algorithm derived from Blowfish that makes use of the slow key schedule; the idea is that the extra computational effort required gives protection against dictionary attacks. See key stretching.
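A structural sketch of how bcrypt exploits that slow key schedule, following the "expensive key setup" idea from the original bcrypt design: the cipher state is rekeyed alternately with the password and the salt 2^cost times, so incrementing the cost parameter doubles the work. The names below (expand_key, eks_blowfish_setup) and the mixing function are illustrative placeholders, not a real API.

#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

/* expand_key() stands in for one Blowfish key-schedule pass over the P-array and S-boxes. */
static uint32_t state;                               /* placeholder for the full P-array/S-box state */

static void expand_key(const uint8_t *data, size_t len) {
    for (size_t i = 0; i < len; i++)
        state = state * 31u + data[i];               /* placeholder mixing only */
}

static void eks_blowfish_setup(unsigned cost, const uint8_t *salt, size_t salt_len,
                               const uint8_t *password, size_t pw_len) {
    expand_key(salt, salt_len);                      /* initial keying with salt and password */
    expand_key(password, pw_len);
    for (uint64_t i = 0; i < (1ULL << cost); i++) {  /* the expensive part: 2^cost further rekeyings */
        expand_key(password, pw_len);
        expand_key(salt, salt_len);
    }
}

int main(void) {
    const uint8_t salt[] = { 's', 'a', 'l', 't' };
    const uint8_t pw[]   = { 's', 'e', 'c', 'r', 'e', 't' };
    eks_blowfish_setup(12, salt, sizeof salt, pw, sizeof pw);   /* cost 12 => 4096 rekeying iterations */
    printf("%08x\n", (unsigned)state);
    return 0;
}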
Blowfish has a memory footprint of just over 4 kilobytes of RAM. This constraint is not a problem even for older desktop and laptop computers, though it does prevent use in the smallest embedded systems such as early smartcards.
Blowfish was one of the first secure block ciphers not subject to any patents and therefore freely available for anyone to use. This benefit has contributed to its popularity in cryptographic software.
bcrypt is a password hashing function which, combined with a variable number of iterations (work "cost"), exploits the expensive key setup phase of Blowfish to increase the workload and duration of hash calculations, further reducing threats from brute force attacks.
bcrypt is also the name of a cross-platform file encryption utility developed in 2002 that implements Blowfish.
Weakness and successors
Blowfish's use of a 64-bit block size (as opposed to e.g. AES's 128-bit block size) makes it vulnerable to birthday attacks, particularly in contexts like HTTPS. In 2016, the SWEET32 attack demonstrated how to leverage birthday attacks to perform plaintext recovery (i.e. decrypting ciphertext) against ciphers with a 64-bit block size. The GnuPG project recommends that Blowfish not be used to encrypt files larger than 4 GB due to its small block size.
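The birthday-bound arithmetic behind that recommendation: with a 64-bit block, ciphertext-block collisions become likely after roughly 2^(64/2) = 2^32 blocks, i.e. about 32 GiB encrypted under a single key, which is why only a few gigabytes should be processed before rekeying. A quick check of the numbers:

#include <stdint.h>
#include <stdio.h>

int main(void) {
    uint64_t block_bits = 64;
    uint64_t birthday_blocks = 1ULL << (block_bits / 2);   /* ~2^32 blocks before collisions are likely */
    uint64_t bytes = birthday_blocks * 8;                  /* 8 bytes per 64-bit block */
    printf("collision threshold: %llu blocks = %llu GiB\n",
           (unsigned long long)birthday_blocks,
           (unsigned long long)(bytes >> 30));             /* prints 4294967296 blocks = 32 GiB */
    return 0;
}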
A reduced-round variant of Blowfish is known to be susceptible to known-plaintext attacks on reflectively weak keys. Blowfish implementations use 16 rounds of encryption, and are not susceptible to this attack.
Bruce Schneier has recommended migrating to his Blowfish successor, Twofish.
See also
Twofish
Threefish
MacGuffin
References
External links
Feistel ciphers
Free ciphers
Articles with example pseudocode |
4524 | https://en.wikipedia.org/wiki/Burroughs%20Corporation | Burroughs Corporation | The Burroughs Corporation was a major American manufacturer of business equipment. The company was founded in 1886 as the American Arithmometer Company. In 1986, it merged with Sperry UNIVAC to form Unisys. The company's history paralleled many of the major developments in computing. At its start, it produced mechanical adding machines, and later moved into programmable ledgers and then computers. It was one of the largest producers of mainframe computers in the world, also producing related equipment including typewriters and printers.
Early history
In 1886, the American Arithmometer Company was established in St. Louis, Missouri, to produce and sell an adding machine invented by William Seward Burroughs (grandfather of Beat Generation author William S. Burroughs). In 1904, six years after Burroughs' death, the company moved to Detroit and changed its name to the Burroughs Adding Machine Company. It was soon the biggest adding machine company in America.
Evolving product lines
The adding machine range began with the basic, hand-cranked P100, which was only capable of adding. The design included some revolutionary features, foremost of which was the dashpot, which governed the speed at which the operating lever could be pulled, allowing the mechanism to operate consistently and correctly. The machine also had a full keyboard with a separate column of keys 1 to 9 for each decade, where the keys latch when pressed, with interlocking which prevented more than one key in any decade from being latched. The latching allowed the operator to quickly check that the correct number had been entered before pulling the operating lever. The numbers entered and the final total were printed on a roll of paper at the rear, so there was no danger of the operator writing down the wrong answer and there was a copy of the calculation which could be checked later if necessary.
The P200 offered a subtraction capability and the P300 provided a means of keeping two separate totals. The P400 provided a moveable carriage, and the P600 and top-of-the-range P612 offered some limited programmability based upon the position of the carriage. The range was further extended by the inclusion of the "J" series, which provided a single-finger calculation facility, and the "c" series of both manual and electrically assisted comptometers. In the late 1960s, the Burroughs-sponsored "nixie tube" provided an electronic display calculator.
Burroughs thus developed a range of adding machines of gradually increasing capability. A revolutionary adding machine was the Sensimatic, which was able to perform many business functions semi-automatically. It had a moving programmable carriage to maintain ledgers. It could store 9, 18 or 27 balances during the ledger posting operations and worked with a mechanical adder named a Crossfooter. The Sensimatic developed into the Sensitronic, which could store balances on a magnetic stripe which was part of the ledger card. This balance was read into the accumulator when the card was inserted into the carriage. The Sensitronic was followed by the E1000, E2000, E3000, E4000, E6000 and the E8000, which were computer systems supporting card reader/punches and a line printer.
Later, Burroughs was selling more than adding machines, including typewriters.
Move into computers
The biggest shift in company history came in 1953: the Burroughs Adding Machine Company was renamed the Burroughs Corporation and began moving into digital computer products, initially for banking institutions. This move began with Burroughs' purchase in June 1956, of the ElectroData Corporation in Pasadena, California, a spinoff of the Consolidated Engineering Corporation which had designed test instruments and had a cooperative relationship with Caltech in Pasadena. ElectroData had built the Datatron 205 and was working on the Datatron 220. The first major computer product that came from this marriage was the B205 tube computer. In the late 1960s the L and TC series range was produced (e.g. the TC500—Terminal Computer 500) which had a golf ball printer and in the beginning a 1K (64 bit) disk memory. These were popular as branch terminals to the B5500/6500/6700 systems, and sold well in the banking sector, where they were often connected to non-Burroughs mainframes. In conjunction with these products, Burroughs also manufactured an extensive range of cheque processing equipment, normally attached as terminals to a larger system such as a B2700 or B1700.
In the 1950s, Burroughs worked with the Federal Reserve Bank on the development and computer processing of magnetic ink character recognition (MICR) especially for the processing of bank cheques. Burroughs made special MICR/OCR sorter/readers which attached to their medium systems line of computers (2700/3700/4700) and this entrenched the company in the computer side of the banking industry.
A force in the computing industry
Burroughs was one of the nine major United States computer companies in the 1960s, with IBM the largest, Honeywell, NCR Corporation, Control Data Corporation (CDC), General Electric (GE), Digital Equipment Corporation (DEC), RCA and Sperry Rand (UNIVAC line). In terms of sales, Burroughs was always a distant second to IBM. In fact, IBM's market share was so much larger than all of the others that this group was often referred to as "IBM and the Seven Dwarves." By 1972 when GE and RCA were no longer in the mainframe business, the remaining five companies behind IBM became known as the BUNCH, an acronym based on their initials.
At the same time, Burroughs was very much a competitor. Like IBM, Burroughs tried to supply a complete line of products for its customers, including Burroughs-designed printers, disk drives, tape drives, computer printing paper, and even typewriter ribbons.
Developments and innovations
The Burroughs Corporation developed three highly innovative architectures, based on the design philosophy of "language-directed design". Their machine instruction sets favored one or many high level programming languages, such as ALGOL, COBOL or FORTRAN. All three architectures were considered mainframe class machines:
The Burroughs large systems machines started with the B5000 in 1961. The B5500 came a few years later when large rotating disks replaced drums as the main external memory media. These B5000 Series systems used the world's first virtual memory multi-programming operating system. They were followed by the B6500/B6700 in the later 1960s, the B7700 in the mid 1970s, and the A series in the 1980s. The underlying architecture of these machines is similar and continues today as the Unisys ClearPath MCP line of computers: stack machines designed to be programmed in an extended Algol 60. Their operating systems, called MCP (Master Control Program—the name later borrowed by the screenwriters for Tron), were programmed in ESPOL (Executive Systems Programming Oriented Language, a minor extension of ALGOL), and later in NEWP (with further extensions to ALGOL) almost a decade before Unix. The command interface developed into a compiled structured language with declarations, statements and procedures called WFL (Work Flow Language).
Many computer scientists consider these series of computers to be technologically groundbreaking. Stack oriented processors, with 48 bit word length where each word was defined as data or program contributed significantly to a secure operating environment, long before spyware and viruses affected computing. And the modularity of these large systems was also unique: multiple CPUs, multiple memory modules and multiple I/O and Data Comm processors permitted incremental and cost effective growth of system performance and reliability.
In industries like banking, where continuous operations was mandatory, Burroughs large systems penetrated most every large bank, including the Federal Reserve Bank. Burroughs built the backbone switching systems for Society for Worldwide Interbank Financial Telecommunication (SWIFT) which sent its first message in 1977. Unisys is still the provider to SWIFT today.
Burroughs produced the B2500 or "medium systems" computers aimed primarily at the business world. The machines were designed to execute COBOL efficiently. This included a BCD (Binary Coded Decimal) based arithmetic unit, storing and addressing the main memory using base 10 numbering instead of binary. The designation for these systems was Burroughs B2500 through B49xx, followed by Unisys V-Series V340 through V560.
Burroughs produced the B1700 or "small systems" computers that were designed to be microprogrammed, with each process potentially getting its own virtual machine designed to be the best match to the programming language chosen for the program being run.
The smallest general-purpose computers were the B700 "microprocessors" which were used both as stand-alone systems and as special-purpose data-communications or disk-subsystem controllers.
Burroughs also manufactured an extensive range of accounting machines including both stand-alone systems such as the Sensimatic, L500 and B80, and dedicated terminals including the TC500 and specialised check processing equipment.
In 1982, Burroughs began producing personal computers, the B20 and B25 lines with the Intel 8086/8088 family of 16-bit chips as the processor. These ran the BTOS operating system, which Burroughs licensed from Convergent Technologies. These machines implemented an early Local Area Network to share a hard disk between workgroup users. These microcomputers were later manufactured in Kunming, China for use in China under agreement with Burroughs.
Burroughs collaborated with University of Illinois on a multiprocessor architecture developing the ILLIAC IV computer in the early 1960s. The ILLIAC had up to 128 parallel processors while the B6700 & B7700 only accommodated a total of 7 CPUs and/or I/O units (the 8th unit was the memory tester).
Burroughs made military computers, such as the D825 (the "D" prefix signifying it was for defense industrial use), in its Great Valley Laboratory in Paoli, Pennsylvania. The D825 was, according to some scholars, the first true multiprocessor computer. Paoli was also home to the Defense and Space Group Marketing Division.
In 1964 Burroughs had also completed the D830 which was another variation of the D825 designed specifically for real-time applications, such as airline reservations. Burroughs designated it the B8300 after Trans World Airlines (TWA) ordered one in September 1965. A system with three instruction processors was installed at TWA's reservations center in Rockleigh, New Jersey in 1968. The system, which was called George, with an application programmed in JOVIAL, was intended to support some 4000 terminals, but the system experienced repeated crashes due to a filing system disk allocation error when operating under a large load. A fourth processor was added but did nothing to resolve the problem. The problem was resolved in late 1970 and the system became stable. Unfortunately, the decision to cancel the project was being made at the very time that the problem was resolved. TWA cancelled the project and acquired one IBM System/360 Model 75, two IBM System/360 model 65s, and IBM's PARS software for its reservations system. TWA sued Burroughs for non-fulfillment of the contract, but Burroughs counter-sued, stating that the basic system did work and that the problems were in TWA's applications software. The two companies reached an out-of-court settlement.
Burroughs developed a half-size version of the D825 called the D82, cutting the word size from 48 to 24 bits and simplifying the computer's instruction set. The D82 could have up to 32,768 words of core memory and continued the use of separate instruction and I/O processors. Burroughs sold a D82 to Air Canada to handle reservations for trips originating in Montreal and Quebec. This design was further refined and made much more compact as the D84 machine which was completed in 1965. A D84 processor/memory unit with 4096 words of memory occupied just . This system was used successfully in two military projects: field test systems used to check the electronics of the Air Force General Dynamics F-111 Aardvark fighter plane and systems used to control the countdown and launch of the Army's Pershing 1 and 1a missile systems.
Merger with Sperry
In September 1986, Burroughs Corporation merged with Sperry Corporation to form Unisys. For a time, the combined company retained the Burroughs processors as the A- and V-systems lines. However, as the market for large systems shifted from proprietary architectures to common servers, the company eventually dropped the V-Series line, although customers continued to use V-series systems. Unisys continues to develop and market the A-Series, now known as ClearPath.
Reemergence of the Burroughs name
In 2010, Unisys sold off its Payment Systems Division to Marlin Equity Partners, a California-based private investment firm, which incorporated it as Burroughs Payment Systems based in Plymouth, Michigan.
References in popular culture
Burroughs B205 hardware has appeared as props in many Hollywood television and film productions from the late 1950s. For example, a B205 console was often shown in the television series Batman as the Bat Computer; also as the computer in Lost in Space. B205 tape drives were often seen in series such as The Time Tunnel and Voyage to the Bottom of the Sea. Craig Ferguson, American talk show host, comedian and actor was a Burroughs apprentice in Cumbernauld, Scotland.
References
Further reading
Allweiss, Jack A., "Evolution of Burroughs Stack Architecture - Mainframe Computers", 2010
Barton, Robert S. "A New Approach to the Functional Design of a Digital Computer" Proc. western joint computer Conf. ACM (1961).
Hauck, E.A., Dent, Ben A. "Burroughs B6500/B7500 Stack Mechanism", SJCC (1968) pp. 245–251.
Martin, Ian L. (2012) "Too far ahead of its time: Barclays, Burroughs and real-time banking", IEEE Annals of the History of Computing 34(2), pp. 5–19. . (Draft version)
Mayer, Alastair J.W., "The Architecture of the Burroughs B5000 - 20 Years Later and Still Ahead of the Times?", ACM Computer Architecture News, 1982 (archived at the Southwest Museum of Engineering, Communications and Computation. Glendale, Arizona)
McKeeman, William M. "Language Directed Computer Design", FJCC (1967) pp. 413–417.
Morgan, Bryan, "Total to Date: The Evolution of the Adding Machine: The Story of Burroughs", Burroughs Adding Machine Limited London, 1953.
Organick, Elliot I. "Computer System Organization The B5700/B6700 series", Academic Press (1973)
Wilner, Wayne T. "Design of the B1700", FJCC pp. 489–497 (1972).
Wilner, Wayne T., "B1700 Design and Implementation", Burroughs Corporation, Santa Barbara Plant, Goleta, California, May 1972.
External links
Burroughs Corporation Records Charles Babbage Institute University of Minnesota, Minneapolis. Collection contains the records of the Burroughs Corporation, and its predecessors the American Arithmometer Company and Burroughs Adding Machine Company. Materials include corporate records, photographs, films and video tapes, scrapbooks, papers of employees and the records of companies acquired by Burroughs. CBI's Burroughs Corporation Records includes over 100,000 photographs depicting the entire visual history of Burroughs from its origin as the American Arithmometer Corporation in 1886 to its merger with the Sperry Corporation to form the Unisys Corporation in 1986.
Burroughs Corporation Photo Database at the Charles Babbage Institute University of Minnesota. The searchable photo database permits browsing and retrieval of over 550 historical images.
"Burroughs B 5000 Conference, OH 98", Oral history on 6 September 1985, Marina del Ray, California. Charles Babbage Institute, University of Minnesota, Minneapolis. The Burroughs 5000 computer series is discussed by individuals responsible for its development and marketing from 1957 through the 1960s in a 1985 conference sponsored by AFIPS and Burroughs Corporation.
Oral history interview with Isaac Levin Auerbach Charles Babbage Institute University of Minnesota. Auerbach discusses his work at Burroughs 1949–1957 managing development for the SAGE project, BEAM I computer, the Intercontinental Ballistic Missile System, a magnetic core encryption communications system, and Atlas missile.
Oral history interview with Robert V. D. Campbell. Discusses his work at Burroughs (1949–1966) as director of research and in program planning.
Oral history interview with Alfred Doughty Cavanaugh Cavanaugh discusses the work of his grandfather, A. J. Doughty, with William Seward Burroughs and the Burroughs Adding Machine Company.
Oral history interview with Carel Sellenraad Charles Babbage Institute University of Minnesota. Sellenraad describes his long association with Burroughs Adding Machine Company, and the impact of World Wars I & II on the sales and service of calculators, and adding and bookkeeping machines in Europe.
Oral history interview with Ovid M. Smith Charles Babbage Institute University of Minnesota. Smith reviews his 46½ year career at Burroughs Adding Machine Company (later Burroughs Corporation).
"Early Burroughs Machines", University of Virginia's Computer Museum.
Older Burroughs computer manuals online
Burroughs computers such as the D825 at BRL
An historical Burroughs Adding Machine Company/Burroughs site
Unofficial list of Burroughs manufacturing plants and labs
Ian Joyner's Burroughs page
The Burroughs B5900 and E-Mode: A bridge to 21st Century Computing - Jack Allweiss
Defunct computer companies of the United States
Defunct computer hardware companies
Mechanical calculator companies
Unisys
Companies based in St. Louis
Manufacturing companies established in 1886
Manufacturing companies disestablished in 1986
Technology companies disestablished in 1986
1886 establishments in Missouri
1986 establishments in the United States
Defunct companies based in Missouri |
4563 | https://en.wikipedia.org/wiki/Battle%20of%20Jutland | Battle of Jutland | The Battle of Jutland (, the Battle of Skagerrak) was a naval battle fought between Britain's Royal Navy Grand Fleet, under Admiral Sir John Jellicoe, and the Imperial German Navy's High Seas Fleet, under Vice-Admiral Reinhard Scheer, during the First World War. The battle unfolded in extensive manoeuvring and three main engagements (the battlecruiser action, the fleet action and the night action), from 31 May to 1 June 1916, off the North Sea coast of Denmark's Jutland Peninsula. It was the largest naval battle and the only full-scale clash of battleships in that war. Jutland was the third fleet action between steel battleships, following the Battle of the Yellow Sea in 1904 and the decisive Battle of Tsushima in 1905, during the Russo-Japanese War. Jutland was the last major battle in world history fought primarily by battleships. In terms of total ships displaced, it was the largest surface naval battle in history.
Germany's High Seas Fleet intended to lure out, trap, and destroy a portion of the Grand Fleet, as the German naval force was insufficient to openly engage the entire British fleet. This formed part of a larger strategy to break the British blockade of Germany and to allow German naval vessels access to the Atlantic. Meanwhile, Great Britain's Royal Navy pursued a strategy of engaging and destroying the High Seas Fleet, thereby keeping German naval forces contained and away from Britain and her shipping lanes.
The Germans planned to use Vice-Admiral Franz Hipper's fast scouting group of five modern battlecruisers to lure Vice-Admiral Sir David Beatty's battlecruiser squadrons into the path of the main German fleet. They stationed submarines in advance across the likely routes of the British ships. However, the British learned from signal intercepts that a major fleet operation was likely, so on 30 May Jellicoe sailed with the Grand Fleet to rendezvous with Beatty, passing over the locations of the German submarine picket lines while they were unprepared. The German plan had been delayed, causing further problems for their submarines, which had reached the limit of their endurance at sea.
On the afternoon of 31 May, Beatty encountered Hipper's battlecruiser force long before the Germans had expected. In a running battle, Hipper successfully drew the British vanguard into the path of the High Seas Fleet. By the time Beatty sighted the larger force and turned back towards the British main fleet, he had lost two battlecruisers from a force of six battlecruisers and four powerful battleships—though he had sped ahead of his battleships of 5th Battle Squadron earlier in the day, effectively losing them as an integral component for much of this opening action against the five ships commanded by Hipper. Beatty's withdrawal at the sight of the High Seas Fleet, which the British had not known were in the open sea, would reverse the course of the battle by drawing the German fleet in pursuit towards the British Grand Fleet. Between 18:30, when the sun was lowering on the western horizon, back-lighting the German forces, and nightfall at about 20:30, the two fleets—totalling 250 ships between them—directly engaged twice.
Fourteen British and eleven German ships sank, with a total of 9,823 casualties. After sunset, and throughout the night, Jellicoe manoeuvred to cut the Germans off from their base, hoping to continue the battle the next morning, but under the cover of darkness Scheer broke through the British light forces forming the rearguard of the Grand Fleet and returned to port.
Both sides claimed victory. The British lost more ships and twice as many sailors but succeeded in containing the German fleet. The British press criticised the Grand Fleet's failure to force a decisive outcome, while Scheer's plan of destroying a substantial portion of the British fleet also failed. The British strategy of denying Germany access to both the United Kingdom and the Atlantic did succeed, which was the British long-term goal. The Germans' "fleet in being" continued to pose a threat, requiring the British to keep their battleships concentrated in the North Sea, but the battle reinforced the German policy of avoiding all fleet-to-fleet contact. At the end of 1916, after further unsuccessful attempts to reduce the Royal Navy's numerical advantage, the German Navy accepted that its surface ships had been successfully contained, subsequently turning its efforts and resources to unrestricted submarine warfare and the destruction of Allied and neutral shipping, which—along with the Zimmermann Telegram—by April 1917 triggered the United States of America's declaration of war on Germany.
Subsequent reviews commissioned by the Royal Navy generated strong disagreement between supporters of Jellicoe and Beatty concerning the two admirals' performance in the battle. Debate over their performance and the significance of the battle continues to this day.
Background and planning
German planning
With 16 dreadnought-type battleships, compared with the Royal Navy's 28, the German High Seas Fleet stood little chance of winning a head-to-head clash. The Germans therefore adopted a divide-and-conquer strategy. They would stage raids into the North Sea and bombard the English coast, with the aim of luring out small British squadrons and pickets, which could then be destroyed by superior forces or submarines.
In January 1916, Admiral von Pohl, commander of the German fleet, fell ill. He was replaced by Scheer, who believed that the fleet had been used too defensively, had better ships and men than the British, and ought to take the war to them.
On 25 April 1916, a decision was made by the German Imperial Admiralty to halt indiscriminate attacks by submarine on merchant shipping. This followed protests from neutral countries, notably the United States, that their nationals had been the victims of attacks. Germany agreed that future attacks would only take place in accord with internationally agreed prize rules, which required an attacker to give a warning and allow the crews of vessels time to escape, and not to attack neutral vessels at all. Scheer believed that it would not be possible to continue attacks on these terms, which took away the advantage of secret approach by submarines and left them vulnerable to even relatively small guns on the target ships. Instead, he set about deploying the submarine fleet against military vessels.
It was hoped that, following a successful German submarine attack, fast British escorts, such as destroyers, would be tied down by anti-submarine operations. If the Germans could catch the British in the expected locations, good prospects were thought to exist of at least partially redressing the balance of forces between the fleets. "After the British sortied in response to the raiding attack force", the Royal Navy's centuries-old instincts for aggressive action could be exploited to draw its weakened units towards the main German fleet under Scheer. The hope was that Scheer would thus be able to ambush a section of the British fleet and destroy it.
Submarine deployments
A plan was devised to station submarines offshore from British naval bases, and then stage some action that would draw out the British ships to the waiting submarines. The battlecruiser Seydlitz had been damaged in a previous engagement, but was due to be repaired by mid May, so an operation was scheduled for 17 May 1916. At the start of May, difficulties with condensers were discovered on ships of the third battleship squadron, so the operation was put back to 23 May. Ten submarines were given orders first to patrol in the central North Sea between 17 and 22 May, and then to take up waiting positions. U-43 and U-44 were stationed in the Pentland Firth, which the Grand Fleet was likely to cross leaving Scapa Flow, while the remainder proceeded to the Firth of Forth, awaiting battlecruisers departing Rosyth. Each boat had an allocated area, within which it could move around as necessary to avoid detection, but was instructed to keep within it. During the initial North Sea patrol the boats were instructed to sail only north–south so that any enemy who chanced to encounter one would believe it was departing or returning from operations on the west coast (which required them to pass around the north of Britain). Once at their final positions, the boats were under strict orders to avoid premature detection that might give away the operation. It was arranged that a coded signal would be transmitted to alert the submarines exactly when the operation commenced: "Take into account the enemy's forces may be putting to sea".
Additionally, UB-27 was sent out on 20 May with instructions to work its way into the Firth of Forth past May Island. U-46 was ordered to patrol the coast of Sunderland, which had been chosen for the diversionary attack, but because of engine problems it was unable to leave port and U-47 was diverted to this task. On 13 May, U-72 was sent to lay mines in the Firth of Forth; on the 23rd, U-74 departed to lay mines in the Moray Firth; and on the 24th, U-75 was dispatched similarly west of the Orkney Islands. UB-21 and UB-22 were sent to patrol the Humber, where (incorrect) reports had suggested the presence of British warships. U-22, U-46 and U-67 were positioned north of Terschelling to protect against intervention by British light forces stationed at Harwich.
On 22 May 1916, it was discovered that Seydlitz was still not watertight after repairs and would not now be ready until the 29th. The ambush submarines were by then on station and experiencing difficulties of their own: visibility near the coast was frequently poor due to fog, and sea conditions were either so calm that the slightest ripple, as from a periscope, could give away their position, or so rough as to make it very hard to keep the vessel at a steady depth. The British had become aware of unusual submarine activity and had begun counter-patrols that forced the submarines out of position. UB-27 passed Bell Rock on the night of 23 May on its way into the Firth of Forth as planned, but was halted by engine trouble. After repairs it continued to approach, following behind merchant vessels, and reached Largo Bay on 25 May. There the boat became entangled in nets that fouled one of the propellers, forcing it to abandon the operation and return home. U-74 was detected by four armed trawlers on 27 May and sunk south-east of Peterhead. U-75 laid its mines off the Orkney Islands; although they played no part in the battle, they were responsible later for sinking the cruiser HMS Hampshire, which was carrying Lord Kitchener (head of the army) on a mission to Russia, on 5 June. U-72 was forced to abandon its mission without laying any mines when an oil leak meant it was leaving a visible surface trail astern.
Zeppelins
The Germans maintained a fleet of Zeppelins that they used for aerial reconnaissance and occasional bombing raids. The planned raid on Sunderland intended to use Zeppelins to watch out for the British fleet approaching from the north, which might otherwise surprise the raiders.
By 28 May, strong north-easterly winds meant that it would not be possible to send out the Zeppelins, so the raid again had to be postponed. The submarines could only stay on station until 1 June before their supplies would be exhausted and they had to return, so a decision had to be made quickly about the raid.
It was decided to use an alternative plan, abandoning the attack on Sunderland but instead sending a patrol of battlecruisers to the Skagerrak, where it was likely they would encounter merchant ships carrying British cargo and British cruiser patrols. It was felt this could be done without air support, because the action would now be much closer to Germany, relying instead on cruiser and torpedo boat patrols for reconnaissance.
Orders for the alternative plan were issued on 28 May, although it was still hoped that last-minute improvements in the weather would allow the original plan to go ahead. The German fleet assembled in the Jade River and at Wilhelmshaven and was instructed to raise steam and be ready for action from midnight on 28 May.
By 14:00 on 30 May, the wind was still too strong and the final decision was made to use the alternative plan. The coded signal "31 May G.G.2490" was transmitted to the ships of the fleet to inform them the Skagerrak attack would start on 31 May. The pre-arranged signal to the waiting submarines was transmitted throughout the day from the E-Dienst radio station at Bruges, and the U-boat tender Arcona anchored at Emden. Only two of the waiting submarines, U-66 and U-32, received the order.
British response
Unfortunately for the German plan, the British had obtained a copy of the main German codebook from the light cruiser SMS Magdeburg, which had been boarded by the Russian Navy after the ship ran aground in Russian territorial waters in 1914. German naval radio communications could therefore often be quickly deciphered, and the British Admiralty usually knew about German activities.
The British Admiralty's Room 40 maintained direction finding and interception of German naval signals. It had intercepted and decrypted a German signal on 28 May that provided "ample evidence that the German fleet was stirring in the North Sea". Further signals were intercepted, and although they were not decrypted it was clear that a major operation was likely. At 11:00 on 30 May, Jellicoe was warned that the German fleet seemed prepared to sail the following morning. By 17:00, the Admiralty had intercepted the signal from Scheer, "31 May G.G.2490", making it clear something significant was imminent.
Not knowing the Germans' objective, Jellicoe and his staff decided to position the fleet off Norway, from where it could head off any German attempt to break out into the Atlantic shipping lanes or to enter the Baltic through the Skagerrak. A position further west was unnecessary, as that area of the North Sea could be patrolled by air using blimps and scouting aircraft.
Consequently, Admiral Jellicoe led the sixteen dreadnought battleships of the 1st and 4th Battle Squadrons of the Grand Fleet and three battlecruisers of the 3rd Battlecruiser Squadron eastwards out of Scapa Flow at 22:30 on 30 May. He was to meet the 2nd Battle Squadron of eight dreadnought battleships commanded by Vice-Admiral Martyn Jerram coming from Cromarty. Beatty's force of six ships of the 1st and 2nd Battlecruiser Squadrons plus the 5th Battle Squadron of four fast battleships left the Firth of Forth at around the same time; Jellicoe intended to rendezvous with him west of the mouth of the Skagerrak off the coast of Jutland and wait for the Germans to appear or for their intentions to become clear. The planned position would give him the widest range of responses to likely German moves. Hipper's raiding force did not leave the Outer Jade Roads until 01:00 on 31 May, passing west of Heligoland Island along a cleared channel through the minefields and then heading north. The main German fleet of sixteen dreadnought battleships of the 1st and 3rd Battle Squadrons left the Jade at 02:30, being joined off Heligoland at 04:00 by the six pre-dreadnoughts of the 2nd Battle Squadron coming from the Elbe River.
Naval tactics in 1916
The principle of concentration of force was fundamental to the fleet tactics of this time (as in earlier periods). Tactical doctrine called for a fleet approaching battle to be in a compact formation of parallel columns, allowing relatively easy manoeuvring, and giving shortened sight lines within the formation, which simplified the passing of the signals necessary for command and control.
A fleet formed in several short columns could change its heading faster than one formed in a single long column. Since most command signals were made with flags or signal lamps between ships, the flagship was usually placed at the head of the centre column so that its signals might be more easily seen by the many ships of the formation. Wireless telegraphy was in use, though security (radio direction finding), encryption, and the limitation of the radio sets made their extensive use more problematic. Command and control of such huge fleets remained difficult.
Thus, it might take a very long time for a signal from the flagship to be relayed to the entire formation. It was usually necessary for a signal to be confirmed by each ship before it could be relayed to other ships, and an order for a fleet movement would have to be received and acknowledged by every ship before it could be executed. In a large single-column formation, a signal could take 10 minutes or more to be passed from one end of the line to the other, whereas in a formation of parallel columns, visibility across the diagonals was often better (and always shorter) than in a single long column, and the diagonals gave signal "redundancy", increasing the probability that a message would be quickly seen and correctly interpreted.
However, before battle was joined the heavy units of the fleet would, if possible, deploy into a single column. To form the battle line in the correct orientation relative to the enemy, the commanding admiral had to know the enemy fleet's distance, bearing, heading, and speed. It was the task of the scouting forces, consisting primarily of battlecruisers and cruisers, to find the enemy and report this information in sufficient time, and, if possible, to deny the enemy's scouting forces the opportunity of obtaining the equivalent information.
Ideally, the battle line would cross the intended path of the enemy column so that the maximum number of guns could be brought to bear, while the enemy could fire only with the forward guns of the leading ships, a manoeuvre known as "crossing the T". Admiral Tōgō, commander of the Japanese battleship fleet, had achieved this against Admiral Zinovy Rozhestvensky's Russian battleships in 1905 at the Battle of Tsushima, with devastating results. Jellicoe achieved this twice in one hour against the High Seas Fleet at Jutland, but on both occasions, Scheer managed to turn away and disengage, thereby avoiding a decisive action.
Ship design
Within the existing technological limits, a trade-off had to be made between the weight and size of guns, the weight of armour protecting the ship, and the maximum speed. Battleships sacrificed speed for armour and heavy naval guns. British battlecruisers sacrificed weight of armour for greater speed, while their German counterparts were armed with lighter guns and carried heavier armour. The weight saved allowed battlecruisers to escape danger or to catch other ships. Generally, the larger guns mounted on British ships allowed an engagement at greater range. In theory, a lightly armoured ship could stay out of range of a slower opponent while still scoring hits. The fast pace of development in the pre-war years meant that every few years a new generation of ships rendered its predecessors obsolete. Thus, fairly young ships could still be obsolete compared with the newest ships, and fare badly in an engagement against them.
Admiral John Fisher, responsible for reconstruction of the British fleet in the pre-war period, favoured large guns, oil fuel, and speed. Admiral Tirpitz, responsible for the German fleet, favoured ship survivability and chose to sacrifice some gun size for improved armour. The newest German battlecruisers had belt armour comparable in thickness—though not as comprehensive—to that of British battleships, and significantly better than that of British battlecruisers such as Tiger. German ships had better internal subdivision and had fewer doors and other weak points in their bulkheads, but with the disadvantage that space for the crew was greatly reduced. As they were designed only for sorties in the North Sea, they did not need to be as habitable as the British vessels, and their crews could live in barracks ashore when in harbour.
Order of battle
Warships of the period were armed with guns firing projectiles of varying weights, bearing high-explosive warheads. The sum of the weights of all the projectiles fired by a ship's broadside guns is referred to as the "weight of broadside". At Jutland, the total weight of broadside of the British ships considerably exceeded that of the German fleet. This does not take into consideration the ability of some ships and their crews to fire more or less rapidly than others, which would increase or decrease the amount of fire that one combatant was able to bring to bear on its opponent over any length of time.
Jellicoe's Grand Fleet was split into two sections. The dreadnought Battle Fleet, with which he sailed, formed the main force and was composed of 24 battleships and three battlecruisers. The battleships were formed into three squadrons of eight ships, further subdivided into divisions of four, each led by a flag officer. Accompanying them were eight armoured cruisers (classified by the Royal Navy since 1913 as "cruisers"), eight light cruisers, four scout cruisers, 51 destroyers, and one destroyer-minelayer.
The Grand Fleet sailed without three of its battleships: one in refit at Invergordon, one dry-docked at Rosyth and one in refit at Devonport. The brand-new Royal Sovereign was left behind; with only three weeks in service, her untrained crew was judged unready for battle.
British reconnaissance was provided by the Battlecruiser Fleet under David Beatty: six battlecruisers, four fast Queen Elizabeth-class battleships, 14 light cruisers and 27 destroyers. Air scouting was provided by the attachment of the seaplane tender Engadine, one of the first aircraft carriers in history to participate in a naval engagement.
The German High Seas Fleet under Scheer was also split into a main force and a separate reconnaissance force. Scheer's main battle fleet was composed of 16 battleships and six pre-dreadnought battleships arranged in an identical manner to the British. With them were six light cruisers and 31 torpedo-boats (the latter being roughly equivalent to a British destroyer).
The German scouting force, commanded by Franz Hipper, consisted of five battlecruisers, five light cruisers and 30 torpedo-boats. The Germans had no equivalent to Engadine and no heavier-than-air aircraft to operate with the fleet but had the Imperial German Naval Airship Service's force of rigid airships available to patrol the North Sea.
All of the battleships and battlecruisers on both sides carried torpedoes of various sizes, as did the lighter craft. The British battleships carried three or four underwater torpedo tubes. The battlecruisers carried from two to five. All were either 18-inch or 21-inch diameter. The German battleships carried five or six underwater torpedo tubes in three sizes from 18 to 21 inch and the battlecruisers carried four or five tubes.
The German battle fleet was hampered by the slow speed and relatively poor armament of the six pre-dreadnoughts of II Squadron, which limited the maximum fleet speed to below that of the British fleet. On the British side, the eight armoured cruisers were deficient in both speed and armour protection. Both of these obsolete squadrons were notably vulnerable to attacks by more modern enemy ships.
Battlecruiser action
The route of the British battlecruiser fleet took it through the patrol sector allocated to U-32. After receiving the order to commence the operation, the U-boat moved to a position east of the Isle of May at dawn on 31 May. At 03:40, it sighted two cruisers leaving the Forth. It launched one torpedo at the leading cruiser, but its periscope jammed 'up', giving away the submarine's position as it manoeuvred to fire a second. The lead cruiser turned away to dodge the torpedo, while the second turned towards the submarine, attempting to ram. U-32 crash-dived, and on raising its periscope at 04:10 saw two battlecruisers (the 2nd Battlecruiser Squadron) heading south-east. They were too far away to attack, but Kapitänleutnant von Spiegel reported the sighting of two battleships and two cruisers to Germany.
U-66 was also supposed to be patrolling off the Firth of Forth but had been forced north to a position off Peterhead by patrolling British vessels. This brought it into contact with the 2nd Battle Squadron, coming from the Moray Firth. At 05:00, it had to crash-dive when a cruiser appeared from the mist heading toward it. It was followed by another cruiser and eight battleships. U-66 closed on the battleships and was preparing to fire, but was forced to dive by an approaching destroyer and missed the opportunity. At 06:35, it reported eight battleships and cruisers heading north.
The courses reported by both submarines were incorrect, because they reflected one leg of a zigzag being used by British ships to avoid submarines. Taken with a wireless intercept of more ships leaving Scapa Flow earlier in the night, they created the impression in the German High Command that the British fleet, whatever it was doing, was split into separate sections moving apart, which was precisely as the Germans wished to meet it.
Jellicoe's ships proceeded to their rendezvous undamaged and undiscovered. However, he was now misled by an Admiralty intelligence report advising that the German main battle fleet was still in port. The Director of Operations Division, Rear Admiral Thomas Jackson, had asked the intelligence division, Room 40, for the current location of German call sign DK, used by Admiral Scheer. They had replied that it was currently transmitting from Wilhelmshaven. It was known to the intelligence staff that Scheer deliberately used a different call sign when at sea, but no one asked for this information or explained the reason behind the query – to locate the German fleet.
The German battlecruisers cleared the minefields surrounding the Amrum swept channel by 09:00. They then proceeded north-west, passing west of the Horn's Reef lightship and heading for the Little Fisher Bank at the mouth of the Skagerrak. The High Seas Fleet followed some distance behind. The battlecruisers were in line ahead, with the four cruisers of the II Scouting Group plus supporting torpedo boats ranged in an arc ahead and to either side. The IX Torpedo Boat Flotilla formed close support immediately surrounding the battlecruisers. The High Seas Fleet similarly adopted a line-ahead formation, with close screening by torpedo boats to either side and a further screen of five cruisers surrounding the column at a greater distance. The wind had finally moderated so that Zeppelins could be used, and by 11:30 five had been sent out: L14 to the Skagerrak, L23 east of Noss Head in the Pentland Firth, L21 off Peterhead, L9 off Sunderland, and L16 east of Flamborough Head. Visibility, however, was still bad, with low cloud.
Contact
By around 14:00, Beatty's ships were proceeding eastward at roughly the same latitude as Hipper's squadron, which was heading north. Had the courses remained unchanged, Beatty would have passed between the two German fleets, south of the battlecruisers and north of the High Seas Fleet, at around 16:30, possibly trapping his ships just as the German plan envisioned. His orders were to end his scouting patrol when he reached a point east of Britain and then turn north to meet Jellicoe, which he did at this time. Beatty's ships were divided into three columns, with the two battlecruiser squadrons leading in parallel lines. The 5th Battle Squadron was stationed to the north-west, on the side furthest away from any expected enemy contact, while a screen of cruisers and destroyers was spread south-east of the battlecruisers. After the turn, the 5th Battle Squadron was leading the British ships in the westernmost column, and Beatty's squadron was centre and rearmost, with the 2nd BCS to the west.
At 14:20 on 31 May, despite heavy haze and scuds of fog giving poor visibility, scouts from Beatty's force reported enemy ships to the south-east; the British light units, investigating a neutral Danish steamer (N J Fjord), which was stopped between the two fleets, had found two German destroyers engaged on the same mission. The first shots of the battle were fired at 14:28 when Galatea and Phaeton of the British 1st Light Cruiser Squadron opened fire on the German torpedo boats, which withdrew toward their approaching light cruisers. At 14:36, the Germans scored the first hit of the battle when Elbing, of Rear-Admiral Friedrich Boedicker's Scouting Group II, hit her British counterpart Galatea at extreme range.
Beatty began to move his battlecruisers and supporting forces south-eastwards and then east to cut the German ships off from their base, and ordered Engadine to launch a seaplane to try to get more information about the size and location of the German forces. This was the first time in history that a carrier-based aeroplane was used for reconnaissance in naval combat. Engadine's aircraft did locate and report some German light cruisers just before 15:30 and came under anti-aircraft gunfire, but attempts to relay reports from the aeroplane failed.
Unfortunately for Beatty, his initial course changes at 14:32 were not received by Sir Hugh Evan-Thomas's 5th Battle Squadron (the distance being too great to read his flags), because the battlecruiser Tiger—the last ship in his column—was no longer in a position where she could relay signals by searchlight to Evan-Thomas, as she had previously been ordered to do. Whereas before the turn north, Tiger had been the closest ship to Evan-Thomas, she was now further away than Beatty in Lion. Matters were aggravated because Evan-Thomas had not been briefed regarding standing orders within Beatty's squadron, as his squadron normally operated with the Grand Fleet. Fleet ships were expected to obey movement orders precisely and not deviate from them; Beatty's standing instructions, by contrast, expected his officers to use their initiative and keep station with the flagship. As a result, the four Queen Elizabeth-class battleships—which were the fastest and most heavily armed battleships in the world at that time—remained on the previous course for several minutes, ending up much further behind Beatty than intended. Beatty had also had the opportunity during the previous hours to concentrate his forces, and no reason not to do so; instead, he steamed ahead at full speed, faster than the battleships could manage. Dividing the force had serious consequences for the British, costing them what would have been an overwhelming advantage in ships and firepower during the first half-hour of the coming battle.
With visibility favouring the Germans, Hipper's battlecruisers, steaming approximately north-west, sighted Beatty's squadron at 15:22 at long range, while Beatty's forces did not identify Hipper's battlecruisers until 15:30 (position 1 on map). At 15:45, Hipper turned south-east to lead Beatty toward Scheer, who was some distance to the south-east with the main force of the High Seas Fleet.
Run to the south
Beatty's conduct during the next 15 minutes has received a great deal of criticism, as his ships out-ranged and outnumbered the German squadron, yet he held his fire for over 10 minutes with the German ships in range. He also failed to use the time available to rearrange his battlecruisers into a fighting formation, with the result that they were still manoeuvring when the battle started.
At 15:48, with the opposing forces roughly parallel and the British to the south-west of the Germans (i.e., on the right side), Hipper opened fire, followed by the British ships as their guns came to bear upon targets (position 2). Thus began the opening phase of the battlecruiser action, known as the Run to the South, in which the British chased the Germans and Hipper intentionally led Beatty toward Scheer. During the first minutes of the ensuing battle, all the British ships except Princess Royal fired far over their German opponents, due to adverse visibility conditions, before finally getting the range. Only Lion and Princess Royal had settled into formation, so the other four ships were hampered in their aiming by their own turning. Beatty was to windward of Hipper, and therefore funnel and gun smoke from his own ships tended to obscure his targets, while Hipper's smoke blew clear. Also, the eastern sky was overcast and the grey German ships were indistinct and difficult to range.
Beatty had ordered his ships to engage in a line, one British ship engaging one German opponent, with his flagship doubling on the German flagship Lützow. However, due to another mistake with signalling by flag, and possibly because Queen Mary and Tiger were unable to see the German lead ship through the smoke, the second German ship, Derfflinger, was left unengaged and free to fire without disruption. Moltke drew fire from two of Beatty's battlecruisers, but still fired with great accuracy during this time, hitting Tiger nine times in the first 12 minutes. The Germans drew first blood. Aided by superior visibility, Hipper's five battlecruisers quickly registered hits on three of the six British battlecruisers. Seven minutes passed before the British managed to score their first hit.
The first near-kill of the Run to the South occurred at 16:00, when a shell from Lützow wrecked the "Q" turret amidships on Beatty's flagship Lion. Dozens of crewmen were instantly killed, but far larger destruction was averted when the mortally wounded turret commander – Major Francis Harvey of the Royal Marines – promptly ordered the magazine doors shut and the magazine flooded. This prevented a magazine explosion at 16:28, when a flash fire ignited ready cordite charges beneath the turret and killed everyone in the chambers outside "Q" magazine. Lion was saved. Indefatigable was not so lucky; at 16:02, just 14 minutes into the gunnery exchange, she was hit aft by three shells from Von der Tann, causing damage sufficient to knock her out of line and detonating "X" magazine aft. Soon after, despite the near-maximum range, Von der Tann put another shell on Indefatigable's "A" turret forward. The plunging shells probably pierced the thin upper armour, and seconds later Indefatigable was ripped apart by another magazine explosion, sinking immediately with her crew of 1,019 officers and men, leaving only two survivors (position 3).
Hipper's position deteriorated somewhat by 16:15 as the 5th Battle Squadron finally came into range, so that he had to contend with gunfire from the four battleships astern as well as from Beatty's five remaining battlecruisers to starboard. But he knew his baiting mission was close to completion, as his force was rapidly closing with Scheer's main body. At 16:08, the lead battleship of the 5th Battle Squadron, Barham, caught up with Hipper and opened fire at extreme range, scoring a hit on Von der Tann within 60 seconds. Still, it was 16:15 before all the battleships of the squadron were able to fully engage at long range.
At 16:25, the battlecruiser action intensified again when Queen Mary was hit by what may have been a combined salvo from Derfflinger and Seydlitz; she disintegrated when both forward magazines exploded, sinking with all but nine of her 1,275-man crew lost (position 4). Commander von Hase, the first gunnery officer aboard Derfflinger, noted:
During the Run to the South, from 15:48 to 16:54, the German battlecruisers made an estimated total of forty-two hits on the British battlecruisers (nine on Lion, six on Princess Royal, seven on Queen Mary, 14 on Tiger, one on New Zealand, five on Indefatigable), and two more on the battleship Barham, compared with only eleven hits by the British battlecruisers (four on Lützow, four on Seydlitz, two on Moltke, one on von der Tann), and six hits by the battleships (one on Seydlitz, four on Moltke, one on von der Tann).
Shortly after 16:26, a salvo struck on or around Princess Royal, which was obscured by spray and smoke from shell bursts. A signalman promptly leapt onto the bridge of Lion and announced "Princess Royal's blown up, Sir." Beatty famously turned to his flag captain, saying "Chatfield, there seems to be something wrong with our bloody ships today." (In popular legend, Beatty also immediately ordered his ships to "turn two points to port", i.e., two points nearer the enemy, but there is no official record of any such command or course change.) Princess Royal, as it turned out, was still afloat after the spray cleared.
At 16:30, Scheer's leading battleships sighted the distant battlecruiser action; soon after, Southampton of Beatty's 2nd Light Cruiser Squadron, led by Commodore William Goodenough, sighted the main body of Scheer's High Seas Fleet, dodging numerous heavy-calibre salvos to report in detail the German strength: 16 dreadnoughts with six older battleships. This was the first news that Beatty and Jellicoe had that Scheer and his battle fleet were even at sea. Simultaneously, an all-out destroyer action raged in the space between the opposing battlecruiser forces, as British and German destroyers fought with each other and attempted to torpedo the larger enemy ships. Each side fired many torpedoes, but both battlecruiser forces turned away from the attacks and all escaped harm except Seydlitz, which was hit forward at 16:57 by a torpedo fired by the British destroyer Petard. Though taking on water, Seydlitz maintained speed. The destroyer Nestor, under the command of Captain Barry Bingham, led the British attacks. The British disabled one German torpedo boat, which the Germans soon abandoned and sank, and Petard then torpedoed and sank another, her second score of the day. Other German torpedo boats rescued the crews of their sunken sister ships. But Nestor and another British destroyer, Nomad, were immobilised by shell hits, and were later sunk by Scheer's passing dreadnoughts. Bingham was rescued, and awarded the Victoria Cross for his leadership in the destroyer action.
Run to the north
As soon as he himself sighted the vanguard of Scheer's distant battleship line at 16:40, Beatty turned his battlecruiser force 180°, heading north to draw the Germans toward Jellicoe (position 5). Beatty's withdrawal toward Jellicoe is called the "Run to the North", in which the tables turned and the Germans chased the British. Because Beatty once again failed to signal his intentions adequately, the battleships of the 5th Battle Squadron – which were too far behind to read his flags – found themselves passing the battlecruisers on an opposing course and heading directly toward the approaching main body of the High Seas Fleet. At 16:48, at extreme range, Scheer's leading battleships opened fire.
Meanwhile, at 16:47, having received Goodenough's signal and knowing that Beatty was now leading the German battle fleet north to him, Jellicoe signalled to his own forces that the fleet action they had waited so long for was finally imminent; at 16:51, by radio, he so informed the Admiralty in London.
The difficulties of the 5th Battle Squadron were compounded when Beatty gave the order to Evan-Thomas to "turn in succession" (rather than "turn together") at 16:48 as the battleships passed him. Evan-Thomas acknowledged the signal, but Lieutenant-Commander Ralph Seymour, Beatty's flag lieutenant, aggravated the situation when he did not haul down the flags (to execute the signal) for some minutes. At 16:55, when the 5th Battle Squadron had moved within range of the enemy battleships, Evan-Thomas issued his own flag command warning his squadron to expect sudden manoeuvres and to follow his lead, before starting to turn on his own initiative. The order to turn in succession would have resulted in all four ships turning in the same patch of sea as they reached it one by one, giving the High Seas Fleet repeated opportunity, with ample time, to find the proper range. However, the captain of the trailing ship, Malaya, turned early, mitigating the adverse results.
For the next hour, the 5th Battle Squadron acted as Beatty's rearguard, drawing fire from all the German ships within range, while by 17:10 Beatty had deliberately eased his own squadron out of range of Hipper's now-superior battlecruiser force. Since visibility and firepower now favoured the Germans, there was no incentive for Beatty to risk further battlecruiser losses when his own gunnery could not be effective. Illustrating the imbalance, Beatty's battlecruisers did not score any hits on the Germans in this phase until 17:45, but they had rapidly received five more before he opened the range (four on Lion, of which three were by Lützow, and one on Tiger by Seydlitz). Now the only targets the Germans could reach, the ships of the 5th Battle Squadron received simultaneous fire from Hipper's battlecruisers to the east (which Barham and Valiant engaged) and Scheer's leading battleships to the south-east (which Warspite and Malaya engaged). Three took hits: Barham (four by Derfflinger), Warspite (two by Seydlitz), and Malaya (seven by the German battleships). Only Valiant was unscathed.
The four battleships were far better suited to take this sort of pounding than the battlecruisers, and none were lost, though Malaya suffered heavy damage, an ammunition fire, and heavy crew casualties. At the same time, the fire of the four British ships was accurate and effective. As the two British squadrons headed north at top speed, eagerly chased by the entire German fleet, the 5th Battle Squadron scored 13 hits on the enemy battlecruisers (four on Lützow, three on Derfflinger, six on Seydlitz) and five on battleships, although only one of these did any serious damage (position 6).
The fleets converge
Jellicoe was now aware that full fleet engagement was nearing, but had insufficient information on the position and course of the Germans. To assist Beatty, early in the battle at about 16:05, Jellicoe had ordered Rear-Admiral Horace Hood's 3rd Battlecruiser Squadron to speed ahead to find and support Beatty's force, and Hood was now racing SSE well in advance of Jellicoe's northern force. Rear-Admiral Arbuthnot's 1st Cruiser Squadron patrolled the van of Jellicoe's main battleship force as it advanced steadily to the south-east.
At 17:33, an armoured cruiser of Arbuthnot's squadron, on the far south-west flank of Jellicoe's force, came within view of a light cruiser some distance ahead of Beatty with the 3rd Light Cruiser Squadron, establishing the first visual link between the converging bodies of the Grand Fleet. At 17:38, the scout cruiser Chester, screening Hood's oncoming battlecruisers, was intercepted by the van of the German scouting forces under Rear-Admiral Boedicker.
Heavily outnumbered by Boedicker's four light cruisers, Chester was pounded before being relieved by Hood's heavy units, which swung westward for that purpose. Hood's flagship Invincible disabled the light cruiser Wiesbaden shortly after 17:56. Wiesbaden became a sitting target for most of the British fleet during the next hour, but remained afloat and fired some torpedoes at the passing enemy battleships from long range. Meanwhile, Boedicker's other ships turned toward Hipper and Scheer in the mistaken belief that Hood was leading a larger force of British capital ships from the north and east. A chaotic destroyer action in mist and smoke ensued as German torpedo boats attempted to blunt the arrival of this new formation, but Hood's battlecruisers dodged all the torpedoes fired at them. In this action, after leading a torpedo counter-attack, the British destroyer Shark was disabled, but continued to return fire at numerous passing enemy ships for the next hour.
Fleet action
Deployment
In the meantime, Beatty and Evan-Thomas had resumed their engagement with Hipper's battlecruisers, this time with the visual conditions to their advantage. With several of his ships damaged, Hipper turned back toward Scheer at around 18:00, just as Beatty's flagship Lion was finally sighted from Jellicoe's flagship Iron Duke. Jellicoe twice demanded the latest position of the German battlefleet from Beatty, who could not see the German battleships and failed to respond to the question until 18:14. Meanwhile, Jellicoe received confused sighting reports of varying accuracy and limited usefulness from light cruisers and battleships on the starboard (southern) flank of his force.
Jellicoe was in a worrying position. He needed to know the location of the German fleet to judge when and how to deploy his battleships from their cruising formation (six columns of four ships each) into a single battle line. The deployment could be on either the westernmost or the easternmost column, and had to be carried out before the Germans arrived; but early deployment could mean losing any chance of a decisive encounter. Deploying to the west would bring his fleet closer to Scheer, gaining valuable time as dusk approached, but the Germans might arrive before the manoeuvre was complete. Deploying to the east would take the force away from Scheer, but Jellicoe's ships might be able to cross the "T", and visibility would strongly favour British gunnery – Scheer's forces would be silhouetted against the setting sun to the west, while the Grand Fleet would be indistinct against the dark skies to the north and east, and would be hidden by reflection of the low sunlight off intervening haze and smoke. Deployment would take twenty irreplaceable minutes, and the fleets were closing at full speed. In one of the most critical and difficult tactical command decisions of the entire war, Jellicoe ordered deployment to the east at 18:15.
Windy Corner
Meanwhile, Hipper had rejoined Scheer, and the combined High Seas Fleet was heading north, directly toward Jellicoe. Scheer had no indication that Jellicoe was at sea, let alone that he was bearing down from the north-west, and was distracted by the intervention of Hood's ships to his north and east. Beatty's four surviving battlecruisers were now crossing the van of the British dreadnoughts to join Hood's three battlecruisers; at this time, Arbuthnot's flagship, the armoured cruiser Defence, and her squadron-mate Warrior both charged across Beatty's bows, and Lion narrowly avoided a collision with Warrior. Nearby, numerous British light cruisers and destroyers on the south-western flank of the deploying battleships were also crossing each other's courses in attempts to reach their proper stations, often barely escaping collisions, and under fire from some of the approaching German ships. This period of peril and heavy traffic attending the merger and deployment of the British forces later became known as "Windy Corner".
Arbuthnot was attracted by the drifting hull of the crippled Wiesbaden. With Warrior, Defence closed in for the kill, only to blunder right into the gun sights of Hipper's and Scheer's oncoming capital ships. Defence was deluged by heavy-calibre gunfire from many German battleships, which detonated her magazines in a spectacular explosion viewed by most of the deploying Grand Fleet. She sank with all hands (903 officers and men). Warrior was also hit badly, but was spared destruction by a mishap to the nearby battleship Warspite. Warspite had her steering gear overheat and jam under heavy load at high speed as the 5th Battle Squadron made a turn to the north at 18:19. Steaming at top speed in wide circles, Warspite attracted the attention of German dreadnoughts and took 13 hits, inadvertently drawing fire away from the hapless Warrior. Warspite was brought back under control and survived the onslaught, but was badly damaged, had to reduce speed, and withdrew northward; later (at 21:07), she was ordered back to port by Evan-Thomas. Warspite went on to a long and illustrious career, serving also in World War II. Warrior, on the other hand, was abandoned and sank the next day after her crew was taken off at 08:25 on 1 June by Engadine, which towed the sinking armoured cruiser during the night.
As Defence sank and Warspite circled, at about 18:19, Hipper moved within range of Hood's 3rd Battlecruiser Squadron, while remaining within range of Beatty's ships. At first, visibility favoured the British: Hood's ships hit Derfflinger three times and Seydlitz once, while Lützow quickly took 10 hits from Lion and Invincible, including two below-waterline hits forward by Invincible that would ultimately doom Hipper's flagship. But at 18:30, Invincible abruptly appeared as a clear target before Lützow and Derfflinger. The two German ships then fired three salvoes each at Invincible and sank her in 90 seconds. A shell from the third salvo struck Invincible's "Q" turret amidships, detonating the magazines below and causing her to blow up and sink. All but six of her crew of 1,032 officers and men, including Rear-Admiral Hood, were killed. Of the remaining British battlecruisers, only Princess Royal received heavy-calibre hits at this time (two by the battleship Markgraf). Lützow, flooding forward and unable to communicate by radio, was now out of action and began to attempt to withdraw; Hipper therefore left his flagship and transferred to the torpedo boat G39, hoping to board one of the other battlecruisers later.
Crossing the T
By 18:30, the main battle fleet action was joined for the first time, with Jellicoe effectively "crossing Scheer's T". The officers on the lead German battleships, and Scheer himself, were taken completely by surprise when they emerged from drifting clouds of smoky mist to find themselves suddenly facing the massed firepower of the entire Grand Fleet main battle line, which they did not know was even at sea. Jellicoe's flagship Iron Duke quickly scored seven hits on the lead German dreadnought, SMS König, but in this brief exchange, which lasted only minutes, as few as 10 of the Grand Fleet's 24 dreadnoughts actually opened fire. The Germans were hampered by poor visibility, in addition to being in an unfavourable tactical position, just as Jellicoe had intended. Realising he was heading into a death trap, Scheer ordered his fleet to turn and disengage at 18:33. Under a pall of smoke and mist, Scheer's forces succeeded in disengaging by an expertly executed 180° turn in unison ("battle about turn to starboard", German: Gefechtskehrtwendung nach Steuerbord), which was a well-practised emergency manoeuvre of the High Seas Fleet. Scheer declared:
Conscious of the risks to his capital ships posed by torpedoes, Jellicoe did not chase directly but headed south, determined to keep the High Seas Fleet west of him. Starting at 18:40, battleships at the rear of Jellicoe's line were in fact sighting and avoiding torpedoes, and at 18:54 Marlborough was hit by a torpedo (probably from the disabled Wiesbaden), which reduced her speed. Meanwhile, Scheer, knowing that it was not yet dark enough to escape and that his fleet would suffer terribly in a stern chase, doubled back to the east at 18:55. In his memoirs he wrote, "the manoeuvre would be bound to surprise the enemy, to upset his plans for the rest of the day, and if the blow fell heavily it would facilitate the breaking loose at night." But the turn to the east took his ships, again, directly towards Jellicoe's fully deployed battle line.
Simultaneously, the disabled British destroyer HMS Shark fought desperately against a group of four German torpedo boats and disabled V48 with gunfire, but was eventually torpedoed and sunk at 19:02 by a German destroyer. Shark's captain, Loftus Jones, was awarded the Victoria Cross for his heroism in continuing to fight against all odds.
Turn of the battle
Commodore Goodenough's 2nd Light Cruiser Squadron dodged the fire of German battleships for a second time to re-establish contact with the High Seas Fleet shortly after 19:00. By 19:15, Jellicoe had crossed Scheer's "T" again. This time his arc of fire was tighter and deadlier, causing severe damage to the German battleships, particularly Rear-Admiral Behncke's leading 3rd Squadron (SMS König and Markgraf among the ships hit, along with a battleship of the 1st Squadron), while on the British side only one battleship was hit, twice, by Seydlitz, with little damage done.
At 19:17, for the second time in less than an hour, Scheer turned his outnumbered and out-gunned fleet to the west using the "battle about turn" (German: Gefechtskehrtwendung), but this time it was executed only with difficulty, as the High Seas Fleet's lead squadrons began to lose formation under concentrated gunfire. To deter a British chase, Scheer ordered a major torpedo attack by his destroyers and a potentially sacrificial charge by Scouting Group I's four remaining battlecruisers. Hipper was still aboard the torpedo boat G39 and was unable to command his squadron for this attack. Therefore, Derfflinger, under Captain Hartog, led the already badly damaged German battlecruisers directly into "the greatest concentration of naval gunfire any fleet commander had ever faced", at close range.
In what became known as the "death ride", all the battlecruisers except Moltke were hit and further damaged, as 18 of the British battleships fired at them simultaneously. Derfflinger had two main gun turrets destroyed. The crews of Scouting Group I suffered heavy casualties, but survived the pounding and veered away with the other battlecruisers once Scheer was out of trouble and the German destroyers were moving in to attack. In this brief but intense portion of the engagement, from about 19:05 to about 19:30, the Germans sustained a total of 37 heavy hits while inflicting only two; Derfflinger alone received 14.
While his battlecruisers drew the fire of the British fleet, Scheer slipped away, laying smoke screens. Meanwhile, from about 19:16 to about 19:40, the British battleships were also engaging Scheer's torpedo boats, which executed several waves of torpedo attacks to cover his withdrawal. Jellicoe's ships turned away from the attacks and successfully evaded all 31 of the torpedoes launched at them – though, in several cases, only barely – and sank the German destroyer S35, attributed to a salvo from Iron Duke. British light forces also sank V48, which had previously been disabled by HMS Shark. This action, and the turn away, cost the British critical time and range in the last hour of daylight – as Scheer intended, allowing him to get his heavy ships out of immediate danger.
The last major exchanges between capital ships in this battle took place just after sunset, from about 20:19 to about 20:35, as the surviving British battlecruisers caught up with their German counterparts, which were briefly relieved by Rear-Admiral Mauve's obsolete pre-dreadnoughts (the German 2nd Squadron). The British received one heavy hit on Princess Royal but scored five more on Seydlitz and three on other German ships. As twilight faded to night and the last of these shots were exchanged, neither side could have imagined that the only encounter between British and German dreadnoughts in the entire war was already concluded.
Night action and German withdrawal
At 21:00, Jellicoe, conscious of the Grand Fleet's deficiencies in night fighting, decided to try to avoid a major engagement until early dawn. He placed a screen of cruisers and destroyers behind his battle fleet to patrol the rear as he headed south to guard Scheer's expected escape route. In reality, Scheer opted to cross Jellicoe's wake and escape via Horns Reef. Luckily for Scheer, most of the light forces in Jellicoe's rearguard failed to report the seven separate encounters with the German fleet during the night; the very few radio reports that were sent to the British flagship were never received, possibly because the Germans were jamming British frequencies. Many of the destroyers failed to make the most of their opportunities to attack discovered ships, despite Jellicoe's expectations that the destroyer forces would, if necessary, be able to block the path of the German fleet.
Jellicoe and his commanders did not understand that the furious gunfire and explosions to the north (seen and heard for hours by all the British battleships) indicated that the German heavy ships were breaking through the screen astern of the British fleet. Instead, it was believed that the fighting was the result of night attacks by German destroyers. The most powerful British ships of all (the 15-inch-gun battleships of the 5th Battle Squadron) directly observed German battleships crossing astern of them in action with British light forces at close range, and the gunners on HMS Malaya made ready to fire, but her captain declined, deferring to the authority of Rear-Admiral Evan-Thomas – and neither commander reported the sightings to Jellicoe, assuming that he could see for himself and that revealing the fleet's position by radio signals or gunfire was unwise.
While the nature of Scheer's escape, and Jellicoe's inaction, indicate the overall German superiority in night fighting, the results of the night action were no more clear-cut than those of the battle as a whole. In the first of many surprise encounters by darkened ships at point-blank range, Southampton, Commodore Goodenough's flagship, which had scouted so proficiently, was heavily damaged in action with a German scouting group composed of light cruisers, but managed to torpedo the light cruiser Frauenlob, which went down at 22:23 with all hands (320 officers and men).
From 23:20 to approximately 02:15, several British destroyer flotillas launched torpedo attacks on the German battle fleet in a series of violent and chaotic engagements at extremely short range. At the cost of five destroyers sunk and some others damaged, they managed to torpedo the light cruiser Rostock, which sank several hours later, and the pre-dreadnought Pommern, which blew up and sank with all hands (839 officers and men) at 03:10 during the last wave of attacks before dawn. Three of the British destroyers collided in the chaos, and the German battleship Nassau rammed the British destroyer Spitfire, blowing away most of the British ship's superstructure merely with the muzzle blast of its big guns, which could not be aimed low enough to hit the ship. Nassau was left with a large hole in her side, reducing her maximum speed, while the removed plating was left lying on Spitfire's deck. Spitfire survived and made it back to port. Another German cruiser, Elbing, was accidentally rammed by the dreadnought Posen and abandoned, sinking early the next day. Of the British destroyers, Tipperary, Ardent, Fortune and Turbulent were among those lost during the night fighting.
Just after midnight on 1 June, German battleships sank Black Prince of the ill-fated 1st Cruiser Squadron, which had blundered into the German battle line. Deployed as part of a screening force several miles ahead of the main force of the Grand Fleet, Black Prince had lost contact in the darkness and took a position near what she thought was the British line. The Germans soon identified the new addition to their line and opened fire. Overwhelmed by point-blank gunfire, Black Prince blew up with the loss of all hands (857 officers and men), as her squadron leader Defence had done hours earlier. Lost in the darkness, the battlecruisers Moltke and Seydlitz had similar point-blank encounters with the British battle line and were recognised, but were spared the fate of Black Prince when the captains of the British ships, again, declined to open fire, reluctant to reveal their fleet's position.
At 01:45, the sinking battlecruiser Lützow – fatally damaged by Invincible during the main action – was torpedoed by an escorting destroyer on the orders of Lützow's captain, Viktor von Harder, after the surviving crew of 1,150 had transferred to destroyers that came alongside. At 02:15, a German torpedo boat suddenly had its bow blown off; V2 and V6 came alongside and took off the remaining crew, and V2 then sank the hulk. Since there was no enemy nearby, it was assumed that she had hit a mine or had been torpedoed by a submarine.
At 02:15, five British ships of the 13th Destroyer Flotilla under Captain James Uchtred Farie regrouped and headed south. At 02:25, they sighted the rear of the German line. One of the destroyers inquired of the leader whether he thought the ships were British or German. Answering that he thought they were German, Farie then veered off to the east and away from the German line. All but Moresby in the rear followed; through the gloom she sighted what she thought were four pre-dreadnought battleships. She hoisted a flag signal indicating that the enemy was to the west and then closed to firing range, letting off a torpedo set for high running at 02:37, then veering off to rejoin her flotilla. The four ships were in fact two pre-dreadnoughts, one of them Schleswig-Holstein, and the battlecruisers Von der Tann and Derfflinger. Von der Tann sighted the torpedo and was forced to steer sharply to starboard to avoid it as it passed close to her bows. Moresby rejoined Champion convinced she had scored a hit.
Finally, at 05:20, as Scheer's fleet was safely on its way home, the battleship Ostfriesland struck a British mine on her starboard side, killing one man and wounding ten, but was able to make port. Seydlitz, critically damaged and very nearly sinking, barely survived the return voyage: after grounding and taking on even more water on the evening of 1 June, she had to be assisted stern-first into port, where she dropped anchor at 07:30 on the morning of 2 June.
The Germans were helped in their escape by the failure of the British Admiralty in London to pass on seven critical radio intercepts obtained by naval intelligence indicating the true position, course and intentions of the High Seas Fleet during the night. One message was transmitted to Jellicoe at 23:15 that accurately reported the German fleet's course and speed as of 21:14. However, the erroneous signal from earlier in the day that had reported the German fleet still in port, and an intelligence signal received at 22:45 giving another unlikely position for the German fleet, had reduced his confidence in intelligence reports. Had the other messages been forwarded, which confirmed the information received at 23:15, or had British ships accurately reported their sightings of and engagements with German destroyers, cruisers and battleships, Jellicoe could have altered course to intercept Scheer at the Horns Reef. The unsent intercepted messages had been duly filed by the junior officer left on duty that night, who failed to appreciate their significance. By the time Jellicoe finally learned of Scheer's whereabouts at 04:15, the German fleet was too far away to catch, and it was clear that the battle could no longer be resumed.
Outcome
As both the Grand Fleet and the High Seas Fleet could claim to have at least partially satisfied their objectives, both Britain and Germany have at various points claimed victory in the Battle of Jutland. Which nation was actually victorious, or if indeed there was a victor at all, remains controversial to this day and there is no single consensus over the outcome.
Reporting
At midday on 2 June, German authorities released a press statement claiming a victory, including the destruction of a battleship, two battlecruisers, two armoured cruisers, a light cruiser, a submarine and several destroyers, for the loss of Pommern and Wiesbaden. News that Lützow, Elbing and Rostock had been scuttled was withheld, on the grounds this information would not be known to the enemy. The victory of the Skagerrak was celebrated in the press, children were given a holiday and the nation celebrated. The Kaiser announced a new chapter in world history. Post-war, the official German history hailed the battle as a victory and it continued to be celebrated until after World War II.
In Britain, the first official news came from German wireless broadcasts. Ships began to arrive in port, their crews sending messages to friends and relatives both of their survival and the loss of some 6,000 others. The authorities considered suppressing the news, but it had already spread widely. Some crews coming ashore found rumours had already reported them dead to relatives, while others were jeered for the defeat they had suffered. At 19:00 on 2 June, the Admiralty released a statement based on information from Jellicoe containing the bare news of losses on each side. The following day British newspapers reported a German victory. The Daily Mirror described the German Director of the Naval Department telling the Reichstag: "The result of the fighting is a significant success for our forces against a much stronger adversary". The British population was shocked that the long anticipated battle had been a victory for Germany. On 3 June, the Admiralty issued a further statement expanding on German losses, and another the following day with exaggerated claims. However, on 7 June the German admission of the losses of Lützow and Rostock started to redress the sense of the battle as a loss. International perception of the battle began to change towards a qualified British victory, the German attempt to change the balance of power in the North Sea having been repulsed. In July, bad news from the Somme campaign swept concern over Jutland from the British consciousness.
Assessments
At Jutland, the Germans, with a 99-strong fleet, sank a substantially greater tonnage of British warships than the 151-strong British fleet sank of German warships. The British lost 6,094 seamen; the Germans 2,551. Several other ships were badly damaged, such as Lion and Seydlitz.
As of the summer of 1916, the High Seas Fleet's strategy was to whittle away the numerical advantage of the Royal Navy by bringing its full strength to bear against isolated squadrons of enemy capital ships whilst declining to be drawn into a general fleet battle until it had achieved something resembling parity in heavy ships. In tactical terms, the High Seas Fleet had clearly inflicted significantly greater losses on the Grand Fleet than it had suffered itself at Jutland, and the Germans never had any intention of attempting to hold the site of the battle, so some historians support the German claim of victory at Jutland.
However, Scheer seems to have quickly realised that further battles with a similar rate of attrition would exhaust the High Seas Fleet long before they reduced the Grand Fleet. Further, after the 19 August advance was nearly intercepted by the Grand Fleet, he no longer believed that it would be possible to trap a single squadron of Royal Navy warships without having the Grand Fleet intervene before he could return to port. Therefore, the High Seas Fleet abandoned its forays into the North Sea and turned its attention to the Baltic for most of 1917 whilst Scheer switched tactics against Britain to unrestricted submarine warfare in the Atlantic.
At a strategic level, the outcome has been the subject of a huge amount of literature with no clear consensus. The battle was widely viewed as indecisive in the immediate aftermath, and this view remains influential.
Despite numerical superiority, the British had been disappointed in their hopes for a decisive battle comparable to Trafalgar and the objective of the influential strategic doctrines of Alfred Mahan. The High Seas Fleet survived as a fleet in being. Most of its losses were made good within a month – even Seydlitz, the most badly damaged ship to survive the battle, was repaired by October and officially back in service by November. However, the Germans had failed in their objective of destroying a substantial portion of the British Fleet, and no progress had been made towards the goal of allowing the High Seas Fleet to operate in the Atlantic Ocean.
Subsequently, there has been considerable support for the view of Jutland as a strategic victory for the British. While the British had not destroyed the German fleet and had lost more ships and lives than their enemy, the Germans had retreated to harbour; at the end of the battle, the British were in command of the area. Britain enforced the blockade, reducing Germany's vital imports to 55%, affecting the ability of Germany to fight the war.
The German fleet would only sortie into the North Sea thrice more, with a raid on 19 August, one in October 1916, and another in April 1918. All three were unopposed by capital ships and quickly aborted as neither side was prepared to take the risks of mines and submarines.
Apart from these three abortive operations the High Seas Fleet – unwilling to risk another encounter with the British fleet – confined its activities to the Baltic Sea for the remainder of the war. Jellicoe issued an order prohibiting the Grand Fleet from steaming south of the line of Horns Reef owing to the threat of mines and U-boats. A German naval expert, writing publicly about Jutland in November 1918, commented, "Our Fleet losses were severe. On 1 June 1916, it was clear to every thinking person that this battle must, and would be, the last one".
There is also significant support for viewing the battle as a German tactical victory, due to the much higher losses sustained by the British. The Germans declared a great victory immediately afterwards, while the British by contrast had only reported short and simple results. In response to public outrage, the First Lord of the Admiralty Arthur Balfour asked Winston Churchill to write a second report that was more positive and detailed.
At the end of the battle, the British had maintained their numerical superiority and had 23 dreadnoughts ready and four battlecruisers still able to fight, while the Germans had only 10 dreadnoughts. One month after the battle, the Grand Fleet was stronger than it had been before sailing to Jutland. Warspite was dry-docked at Rosyth, returning to the fleet on 22 July, while Malaya was repaired in the floating dock at Invergordon, returning to duty on 11 July. Barham was docked for a month at Devonport before undergoing speed trials and returning to Scapa Flow on 8 July. Princess Royal stayed initially at Rosyth but transferred to dry dock at Portsmouth before returning to duty at Rosyth 21 July. Tiger was dry-docked at Rosyth and ready for service 2 July. Queen Elizabeth, Emperor of India and , which had been undergoing maintenance at the time of the battle, returned to the fleet immediately, followed shortly after by Resolution and Ramillies. Lion initially remained ready for sea duty despite the damaged turret, then underwent a month's repairs in July when Q turret was removed temporarily and replaced in September.
A third view, presented in a number of recent evaluations, is that Jutland, the last major fleet action between battleships, illustrated the irrelevance of battleship fleets following the development of the submarine, mine and torpedo. In this view, the most important consequence of Jutland was the decision of the Germans to engage in unrestricted submarine warfare. Although large numbers of battleships were constructed in the decades between the wars, it has been argued that this outcome reflected the social dominance among naval decision-makers of battleship advocates who constrained technological choices to fit traditional paradigms of fleet action. Battleships played a relatively minor role in World War II, in which the submarine and aircraft carrier emerged as the dominant offensive weapons of naval warfare.
British self-critique
The official British Admiralty examination of the Grand Fleet's performance recognised two main problems:
British armour-piercing shells exploded outside the German armour rather than penetrating and exploding within. As a result, some German ships with only -thick armour survived hits from projectiles. Had these shells penetrated the armour and then exploded, German losses would probably have been far greater.
Communication between ships and the British commander-in-chief was comparatively poor. For most of the battle, Jellicoe had no idea where the German ships were, even though British ships were in contact. They failed to report enemy positions, contrary to the Grand Fleet's Battle Plan. Some of the most important signalling was carried out solely by flag, rather than by wireless or by redundant methods to ensure the messages got through. This was a questionable procedure, given the mixture of haze and smoke that obscured the battlefield, and it foreshadowed similar failures by habit-bound and conservatively minded senior officers to take advantage of new technology in World War II.
Shell performance
German armour-piercing shells were far more effective than the British ones, which often failed to penetrate heavy armour. The issue particularly concerned shells striking at oblique angles, which became increasingly the case at long range. Germany had adopted trinitrotoluene (TNT) as the explosive filler for artillery shells in 1902, while the United Kingdom was still using a picric acid mixture (Lyddite). The shock of impact of a shell against armour often prematurely detonated Lyddite in advance of fuze function while TNT detonation could be delayed until after the shell had penetrated and the fuze had functioned in the vulnerable area behind the armour plate. Some 17 British shells hit the side armour of the German dreadnoughts or battlecruisers. Of these, four would not have penetrated under any circumstances. Of the remaining 13, one penetrated the armour and exploded inside. This showed a 7.5% chance of proper shell function on the British side, a result of overly brittle shells and Lyddite exploding too soon.
The issue of poorly performing shells had been known to Jellicoe, who as Third Sea Lord from 1908 to 1910 had ordered new shells to be designed. However, the matter had not been followed through after his posting to sea and new shells had never been thoroughly tested. Beatty discovered the problem at a party aboard Lion a short time after the battle, when a Swedish Naval officer was present. He had recently visited Berlin, where the German navy had scoffed at how British shells had broken up on their ships' armour. The question of shell effectiveness had also been raised after the Battle of Dogger Bank, but no action had been taken. Hipper later commented, "It was nothing but the poor quality of their bursting charges which saved us from disaster."
Admiral Dreyer, writing later about the battle, during which he had been captain of the British flagship Iron Duke, estimated that effective shells as later introduced would have led to the sinking of six more German capital ships, based upon the actual number of hits achieved in the battle. The system of testing shells, which remained in use up to 1944, meant that, statistically, a batch of shells of which 70% were faulty stood an even chance of being accepted. Indeed, even shells that failed this relatively mild test had still been issued to ships. Analysis of the test results afterwards by the Ordnance Board suggested the likelihood that 30–70% of shells would not have passed the standard penetration test specified by the Admiralty.
Efforts to replace the shells were initially resisted by the Admiralty, and action was not taken until Jellicoe became First Sea Lord in December 1916. As an initial response, the worst of the existing shells were withdrawn from ships in early 1917 and replaced from reserve supplies. New shells were designed, but did not arrive until April 1918, and were never used in action.
Battlecruiser losses
British battlecruisers were designed to chase and destroy enemy cruisers from outside the range of those ships. They were not designed to be ships of the line and exchange broadsides with the enemy. One German and three British battlecruisers were sunk, but none were destroyed by enemy shells penetrating the belt armour and detonating the magazines. Each of the British battlecruisers was penetrated through a turret roof and her magazines ignited by flash fires passing through the turret and shell-handling rooms. Lützow sustained 24 hits and her flooding could not be contained. She was eventually sunk by her escorts' torpedoes after most of her crew had been safely removed (though six trapped stokers died when the ship was scuttled). Derfflinger and Seydlitz sustained 22 hits each but reached port (although in Seydlitz's case only just).
Jellicoe and Beatty, as well as other senior officers, gave an impression that the loss of the battlecruisers was caused by weak armour, despite reports by two committees and earlier statements by Jellicoe and other senior officers that Cordite and its management were to blame. This led to calls for armour to be increased, and an additional was placed over the relatively thin decks above magazines. To compensate for the increase in weight, ships had to carry correspondingly less fuel, water and other supplies. Whether or not thin deck armour was a potential weakness of British ships, the battle provided no evidence that it was the case. At least amongst the surviving ships, no enemy shell was found to have penetrated deck armour anywhere. The design of the new battlecruiser (which had started building at the time of the battle) was altered to give her of additional armour.
Ammunition handling
British and German propellant charges differed in packaging, handling, and chemistry. The British propellant was of two types, MK1 and MD. The Mark 1 cordite had a formula of 37% nitrocellulose, 58% nitroglycerine, and 5% petroleum jelly. It was a good propellant but burned hot and caused an erosion problem in gun barrels. The petroleum jelly served as both a lubricant and a stabiliser. Cordite MD was developed to reduce barrel wear, its formula being 65% nitrocellulose, 30% nitroglycerine, and 5% petroleum jelly. While cordite MD solved the gun-barrel erosion issue, it did nothing to improve its storage properties, which were poor. Cordite was very sensitive to variations of temperature, and acid propagation/cordite deterioration would take place at a very rapid rate. Cordite MD also shed micro-dust particles of nitrocellulose and iron pyrite. While cordite propellant was manageable, it required a vigilant gunnery officer, strict cordite lot control, and frequent testing of the cordite lots in the ships' magazines.
British cordite propellant (when uncased and exposed in the silk bag) tended to burn violently, causing uncontrollable "flash fires" when ignited by nearby shell hits. In 1945, a test was conducted by the U.S.N. Bureau of Ordnance (Bulletin of Ordnance Information, No.245, pp. 54–60) testing the sensitivity of cordite to then-current U.S. Naval propellant powders against a measurable and repeatable flash source. It found that cordite would ignite at from the flash, the current U.S. powder at , and the U.S. flashless powder at .
This meant that about 75 times the propellant would immediately ignite when exposed to flash, as compared to the U.S. powder. British ships had inadequate protection against these flash fires. German propellant (RP C/12, handled in brass cartridge cases) was less vulnerable and less volatile in composition. German propellants were not that different in composition from cordite—with one major exception: centralite. This was symmetrical diethyl diphenyl urea, which served as a stabiliser that was superior to the petroleum jelly used in British practice. It stored better and burned but did not explode. Stored and used in brass cases, it proved much less sensitive to flash. RP C/12 was composed of 64.13% nitrocellulose, 29.77% nitroglycerine, 5.75% centralite, 0.25% magnesium oxide and 0.10% graphite.
The Royal Navy Battle Cruiser Fleet had also emphasised speed in ammunition handling over established safety protocol. In practice drills, cordite could not be supplied to the guns rapidly enough through the hoists and hatches. To bring up the propellant in good time to load for the next broadside, many safety doors were kept open that should have been shut to safeguard against flash fires. Bags of cordite were also stocked and kept locally, creating a total breakdown of safety design features. By staging charges in the chambers between the gun turret and magazine, the Royal Navy enhanced their rate of fire but left their ships vulnerable to chain reaction ammunition fires and magazine explosions (Campbell, pp. 371–372). This 'bad safety habit' carried over into real battle practices. Furthermore, the doctrine of a high rate of fire also led to the decision in 1913 to increase the supply of shells and cordite held on the British ships by 50%, for fear of running out of ammunition. When this exceeded the capacity of the ships' magazines, cordite was stored in insecure places.
The British cordite charges were stored two silk bags to a metal cylindrical container, with a 16-oz gunpowder igniter charge, which was covered with a thick paper wad, four charges being used on each projectile. The gun crews were removing the charges from their containers and removing the paper covering over the gunpowder igniter charges. The effect of having eight loads at the ready was to have of exposed explosive, with each charge leaking small amounts of gunpowder from the igniter bags. In effect, the gun crews had laid an explosive train from the turret to the magazines, and one shell hit to a battlecruiser turret was enough to end a ship.
A diving expedition during the summer of 2003 provided corroboration of this practice. It examined the wrecks of Invincible, Queen Mary, Defence, and Lützow to investigate the cause of the British ships' tendency to suffer from internal explosions. From this evidence, a major part of the blame may be laid on lax handling of the cordite propellant for the shells of the main guns. The wreck of the Queen Mary revealed cordite containers stacked in the working chamber of the X turret instead of the magazine.
There was a further difference in the propellant itself. While the German RP C/12 burned when exposed to fire, it did not explode, as opposed to cordite. RP C/12 was extensively studied by the British and, after World War I, would form the basis of the later Cordite SC.
The memoirs of Alexander Grant, Gunner on Lion, suggest that some British officers were aware of the dangers of careless handling of cordite.
Grant had already introduced measures onboard Lion to limit the number of cartridges kept outside the magazine and to ensure doors were kept closed, probably contributing to her survival.
On 5 June 1916, the First Lord of the Admiralty advised Cabinet Members that the three battlecruisers had been lost due to unsafe cordite management.
On 22 November 1916, following detailed interviews of the survivors of the destroyed battlecruisers, the Third Sea Lord, Rear Admiral Tudor, issued a report detailing the stacking of charges by the gun crews in the handling rooms to speed up loading of the guns.
After the battle, the B.C.F. Gunnery Committee issued a report (at the command of Admiral David Beatty) advocating immediate changes in flash protection and charge handling. It reported, among other things, that:
Some vent plates in magazines allowed flash into the magazines and should be retro-fitted to a new standard.
Bulkheads in HMS Lion's magazine showed buckling from fire under pressure (overpressure) – despite being flooded and therefore supported by water pressure – and must be made stronger.
Doors opening inward to magazines were an extreme danger.
Current designs of turrets could not eliminate flash from shell bursts in the turret from reaching the handling rooms.
Ignition pads must not be attached to charges but instead be placed just before ramming.
Better methods must be found for safe storage of ready charges than the current method.
Some method for rapidly drowning charges already in the handling path must be devised.
Handling scuttles (special flash-proof fittings for moving propellant charges through ship's bulkheads), designed to handle overpressure, must be fitted.
The United States Navy in 1939 had quantities of Cordite N, a Canadian propellant that was much improved, yet its Bureau of Ordnance objected strongly to its use onboard U.S. warships, considering it unsuitable as a naval propellant due to its inclusion of nitroglycerin.
Gunnery
British gunnery control systems, based on Dreyer tables, were well in advance of the German ones, as demonstrated by the proportion of main calibre hits made on the German fleet. Because of its demonstrated advantages, it was installed on ships progressively as the war went on, had been fitted to a majority of British capital ships by May 1916, and had been installed on the main guns of all but two of the Grand Fleet's capital ships. The Royal Navy used centralised fire-control systems on their capital ships, directed from a point high up on the ship where the fall of shells could best be seen, utilising a director sight for both training and elevating the guns. In contrast, the German battlecruisers controlled the fire of turrets using a training-only director, which also did not fire the guns at once. The rest of the German capital ships were without even this innovation. German range-finding equipment was generally superior to the British FT24, as its operators were trained to a higher standard due to the complexity of the Zeiss range finders. Their stereoscopic design meant that in certain conditions they could range on a target enshrouded by smoke. The German equipment was not superior in range to the British Barr & Stroud rangefinder found in the newest British capital ships, and, unlike the British range finders, the German range takers had to be replaced as often as every thirty minutes, as their eyesight became impaired, affecting the ranges provided to their gunnery equipment.
The results of the battle confirmed the value of firing guns by centralised director. The battle prompted the Royal Navy to install director firing systems in cruisers and destroyers, where it had not thus far been used, and for secondary armament on battleships.
German ships were considered to have been quicker in determining the correct range to targets, thus obtaining an early advantage. The British used a 'bracket system', whereby a salvo was fired at the best-guess range and, depending where it landed, the range was progressively corrected up or down until successive shots were landing in front of and behind the enemy. The Germans used a 'ladder system', whereby an initial volley of three shots at different ranges was used, with the centre shot at the best-guess range. The ladder system allowed the gunners to get ranging information from the three shots more quickly than the bracket system, which required waiting between shots to see how the last had landed. British ships adopted the German system.
It was determined that range finders of the sort issued to most British ships were not adequate at long range and did not perform as well as the range finders on some of the most modern ships. In 1917, range finders of base lengths of were introduced on the battleships to improve accuracy.
Signalling
Throughout the battle, British ships experienced difficulties with communications, whereas the Germans did not suffer such problems. The British preferred signalling using ship-to-ship flag and lamp signals, avoiding wireless, whereas the Germans used wireless successfully. One conclusion drawn was that flag signals were not a satisfactory way to control the fleet. Experience using lamps, particularly at night when issuing challenges to other ships, demonstrated this was an excellent way to advertise a ship's precise location to an enemy, inviting a reply by gunfire. Recognition signals by lamp, once seen, could also easily be copied in future engagements.
British ships not only failed to report engagements with the enemy but also, in the case of cruisers and destroyers, failed to actively seek out the enemy. A culture had arisen within the fleet of not acting without orders, which could prove fatal when any circumstances prevented orders being sent or received. Commanders failed to engage the enemy because they believed other, more senior officers must also be aware of the enemy nearby, and would have given orders to act if this was expected. Wireless, the most direct way to pass messages across the fleet (although it was being jammed by German ships), was avoided either for perceived reasons of not giving away the presence of ships or for fear of cluttering up the airwaves with unnecessary reports.
Fleet Standing Orders
Naval operations were governed by standing orders issued to all the ships. These attempted to set out what ships should do in all circumstances, particularly in situations where ships would have to react without referring to higher authority, or when communications failed. A number of changes were introduced as a result of experience gained in the battle.
A new signal was introduced instructing squadron commanders to act independently as they thought best while still supporting the main fleet, particularly for use when circumstances would make it difficult to send detailed orders. The description stressed that this was not intended to be the only time commanders might take independent action, but was intended to make plain times when they definitely should. Similarly, instructions on what to do if the fleet was instructed to take evasive action against torpedoes were amended. Commanders were given discretion that if their part of the fleet was not under immediate attack, they should continue engaging the enemy rather than turning away with the rest of the fleet. In this battle, when the fleet turned away from Scheer's destroyer attack covering his retreat, not all the British ships had been affected, and could have continued to engage the enemy.
A number of opportunities to attack enemy ships by torpedo had presented themselves but had been missed. All ships, not just the destroyers armed principally with torpedoes but also battleships, were reminded that they carried torpedoes intended to be used whenever an opportunity arose. Destroyers were instructed to close with the enemy fleet and fire torpedoes as soon as engagements between the main ships on either side kept enemy guns occupied with larger targets. Destroyers should also be ready to engage enemy destroyers immediately if they launched an attack, endeavouring to disrupt their chances of firing torpedoes and to keep them away from the main fleet.
To add some flexibility when deploying for attack, a new signal was provided for deploying the fleet to the centre, rather than as previously only either to left or right of the standard closed-up formation for travelling. The fast and powerful 5th Battle Squadron was moved to the front of the cruising formation so it would have the option of deploying left or right depending upon the enemy position. In the event of engagements at night, although the fleet still preferred to avoid night fighting, a destroyer and cruiser squadron would be specifically detailed to seek out the enemy and launch destroyer attacks.
Controversy
At the time, Jellicoe was criticised for his caution and for allowing Scheer to escape. Beatty, in particular, was convinced that Jellicoe had missed a tremendous opportunity to annihilate the High Seas Fleet and win what would amount to another Trafalgar. Jellicoe was promoted away from active command to become First Sea Lord, the professional head of the Royal Navy, while Beatty replaced him as commander of the Grand Fleet.
The controversy raged within the navy and in public for about a decade after the war. Criticism focused on Jellicoe's decision at 19:15. Scheer had ordered his cruisers and destroyers forward in a torpedo attack to cover the turning away of his battleships. Jellicoe chose to turn to the south-east, and so keep out of range of the torpedoes. Supporters of Jellicoe, including the historian Cyril Falls, pointed to the folly of risking defeat in battle when one already has command of the sea. Jellicoe himself, in a letter to the Admiralty seventeen months before the battle, said that he intended to turn his fleet away from any mass torpedo attack (that being the universally accepted proper tactical response to such attacks, practised by all the major navies of the world). He said that, in the event of a fleet engagement in which the enemy turned away, he would assume they intended to draw him over mines or submarines, and he would decline to be so drawn. The Admiralty approved this plan and expressed full confidence in Jellicoe at the time (October 1914).
The stakes were high, the pressure on Jellicoe immense, and his caution certainly understandable. His judgement might have been that even 90% odds in favour were not good enough to bet the British Empire. Churchill said of the battle that Jellicoe "was the only man on either side who could have lost the war in an afternoon."
The criticism of Jellicoe also fails to sufficiently credit Scheer, who was determined to preserve his fleet by avoiding the full British battle line, and who showed great skill in effecting his escape.
Beatty's actions
On the other hand, some of Jellicoe's supporters condemned the actions of Beatty for the British failure to achieve a complete victory. Although Beatty was undeniably brave, his mismanagement of the initial encounter with Hipper's squadron and the High Seas Fleet cost him a considerable advantage in the first hours of the battle. His most glaring failure was in not providing Jellicoe with periodic information on the position, course, and speed of the High Seas Fleet. Beatty, aboard the battlecruiser Lion, left behind the four fast battleships of the 5th Battle Squadron – the most powerful warships in the world at the time – engaging with six ships when better control would have given him 10 against Hipper's five. Though Beatty's larger guns out-ranged Hipper's guns by thousands of yards, Beatty held his fire for 10 minutes and closed the German squadron until within range of the Germans' superior gunnery, under lighting conditions that favoured the Germans. Most of the British losses in tonnage occurred in Beatty's force.
Death toll
The total loss of life on both sides was 9,823 personnel: the British losses numbered 6,784 and the German 3,039. Counted among the British losses were two members of the Royal Australian Navy and one member of the Royal Canadian Navy. Six Australian nationals serving in the Royal Navy were also killed.
British
113,300 tons sunk:
Battlecruisers , ,
Armoured cruisers , ,
Flotilla leader
Destroyers , , , , , ,
German
62,300 tons sunk:
Battlecruiser
Pre-dreadnought
Light cruisers , , ,
Destroyers (Heavy torpedo-boats) , , , ,
Selected honours
The Victoria Cross is the highest military decoration awarded for valour "in the face of the enemy" to members of the British Empire armed forces. The Pour le Mérite was the highest military order of the Kingdom of Prussia, and consequently of the German Empire, until the end of the First World War.
Pour le Mérite
Franz Hipper ()
Reinhard Scheer ()
Victoria Cross
The Hon. Edward Barry Stewart Bingham ()
John Travers Cornwell ()
Francis John William Harvey ()
Loftus William Jones ()
Status of the survivors and wrecks
In the years following the battle the wrecks were slowly discovered. Invincible was found by the Royal Navy minesweeper in 1919. After the Second World War some of the wrecks seem to have been commercially salvaged. For instance, the Hydrographic Office record for SMS Lützow (No.32344) shows that salvage operations were taking place on the wreck in 1960.
During 2000–2016 a series of diving and marine survey expeditions involving veteran shipwreck historian and archaeologist Innes McCartney located all of the wrecks sunk in the battle. It was discovered that over 60 per cent of them had suffered from metal theft. In 2003 McCartney led a detailed survey of the wrecks for the Channel 4 documentary "Clash of the Dreadnoughts". The film examined the last minutes of the lost ships and revealed for the first time how both 'P' and 'Q' turrets of Invincible had been blasted out of the ship and tossed into the sea before she broke in half. This was followed by the Channel 4 documentary "Jutland: WWI's Greatest Sea Battle", broadcast in May 2016, which showed how several of the major losses at Jutland had actually occurred and just how accurate the "Harper Record" actually was.
On the 90th anniversary of the battle, in 2006, the UK Ministry of Defence belatedly announced that the 14 British vessels lost in the battle were being designated as protected places under the Protection of Military Remains Act 1986. This legislation only affects British ships and citizens and in practical terms offers no real protection from non-British salvors of the wreck sites. In May 2016 a number of British newspapers named the Dutch salvage company "Friendship Offshore" as one of the main salvors of the Jutland wrecks in recent years and depicted leaked photographs revealing the extent of their activities on the wreck of Queen Mary.
The last surviving veteran of the battle, Henry Allingham, a British RAF (originally RNAS) airman, died on 18 July 2009, aged 113, by which time he was the oldest documented man in the world and one of the last surviving veterans of the whole war. Also among the combatants was the then 20-year-old Prince Albert, serving as a junior officer aboard HMS Collingwood. He was second in the line to the throne, but would become king as George VI following his brother Edward's abdication in 1936.
One ship from the battle survives and is still (in 2021) afloat: the light cruiser . Decommissioned in 2011, she is docked at the Alexandra Graving Dock in Belfast, Northern Ireland and is a museum ship.
Remembrance
The Battle of Jutland was annually celebrated as a great victory by the right wing in Weimar Germany. This victory was used to repress the memory of the German navy's initiation of the German Revolution of 1918–1919, as well as the memory of the defeat in World War I in general. (The celebrations of the Battle of Tannenberg played a similar role.) This is especially true for the city of Wilhelmshaven, where wreath-laying ceremonies and torch-lit parades were performed until the end of the 1960s.
In 1916 Konteradmiral Friedrich von Kühlwetter (1865–1931) wrote a detailed analysis of the battle and published it in a book under the title "Skagerrak" (first published anonymously), which was reprinted in large numbers until after WWII and had a huge influence in keeping the battle in public memory amongst Germans, as it was not tainted by the ideology of the Third Reich. Kühlwetter built the School for Naval Officers at Mürwik near Flensburg, where he is still remembered.
In May 2016, the 100th-anniversary commemoration of the Battle of Jutland was held. On 29 May, a commemorative service was held at St Mary's Church, Wimbledon, where the ensign from HMS Inflexible is on permanent display. On 31 May, the main service was held at St Magnus Cathedral in Orkney, attended by the British prime minister, David Cameron, and the German president, Joachim Gauck, along with Princess Anne and Vice Admiral Sir Tim Laurence. A centennial exposition was held at the Deutsches Marinemuseum in Wilhelmshaven from 29 May 2016 to 28 February 2017.
Film
Wrath of the Seas (Die versunkene Flotte), 1926, director Manfred Noa
See also
List of the largest artificial non-nuclear explosions
Sea War Museum Jutland
Naval warfare of World War I
Notes
Citations
Bibliography
Black, Jeremy. "Jutland's Place in History," Naval History (June 2016) 30#3 pp 16–21.
Corbett, Sir Julian. (2015) Maritime Operations In The Russo-Japanese War 1904-1905. Vol. 1, originally published Jan 1914. Naval Institute Press;
Corbett, Sir Julian. (2015) Maritime Operations In The Russo-Japanese War 1904-1905. Vol. 2, originally published Oct 1915. Naval Institute Press;
Costello, John (1976) Jutland 1916 with Terry Hughes
Friedman, Norman. (2013) Naval Firepower, Battleship Guns And Gunnery In The Dreadnaught Era. Seaforth Publishing;
Further reading
H.W. Fawcett & G.W.W. Hooper, RN (editors), The fighting at Jutland (abridged edition); the personal experiences of forty-five officers and men of the British Fleet London: MacMillan & Co, 1921
Lambert, Andrew. "Writing the Battle: Jutland in Sir Julian Corbett's Naval Operations," Mariner's Mirror 103#2 (2017), pp. 175–95 (historiography).
External links
WW1 Centenary News - Battle of Jutland
Jutland Centenary Initiative
Jutland Commemoration Exhibition
Interactive Map of Jutland Sailors
Beatty's official report
Jellicoe's official despatch
Jellicoe, extract from The Grand Fleet, published 1919
World War I Naval Combat – Despatches
Scheer, Germany's High Seas Fleet in the World War , published 1920
Henry Allingham Last known survivor of the Battle of Jutland
Jutland Casualties Listed by Ship
germannavalwarfare.info Some Original Documents from the British Admiralty, Room 40, regarding the Battle of Jutland: Photocopies from The National Archives, Kew, Richmond, UK.
Battle of Jutland Crew Lists Project
Memorial park for the Battle of Jutland
Battle of Jutland Crew Lists Project Wiki
Battle-of-Jutland.com The website owner has a package of original documents
Notable accounts
by Rudyard Kipling Retrieved 2009-10-31.
by Alexander Grant, a gunner aboard HMS Lion
A North Sea diary, 1914–1918, by Stephen King-Hall, a junior officer on the light cruiser
by Paul Berryman, a junior officer on
by Moritz von Egidy, captain of SMS Seydlitz
by Richard Foerster, gunnery officer on Seydlitz
by Georg von Hase, gunnery officer on Derfflinger
(Note: Due to the time difference, entries in some of the German accounts are one hour ahead of the times in this article.)
Conflicts in 1916
1916 in Denmark
1916 in Germany
1916 in the United Kingdom
Naval battles of World War I involving Australia
Naval battles of World War I involving Germany
Naval battles of World War I involving the United Kingdom
North Sea operations of World War I
Protected Wrecks of the United Kingdom
Military history of the North Sea
May 1916 events
June 1916 events
Germany–United Kingdom military relations |
4594 | https://en.wikipedia.org/wiki/Block%20cipher | Block cipher | In cryptography, a block cipher is a deterministic algorithm operating on fixed-length groups of bits, called blocks. Block ciphers are specified elementary components in the design of many cryptographic protocols and are widely used to encrypt large amounts of data, including in data exchange protocols. A block cipher uses blocks as an unvarying transformation.
Even a secure block cipher is suitable for the encryption of only a single block of data at a time, using a fixed key. A multitude of modes of operation have been designed to allow their repeated use in a secure way to achieve the security goals of confidentiality and authenticity. However, block ciphers may also feature as building blocks in other cryptographic protocols, such as universal hash functions and pseudorandom number generators.
Definition
A block cipher consists of two paired algorithms, one for encryption, E, and the other for decryption, D. Both algorithms accept two inputs: an input block of size n bits and a key of size k bits; and both yield an n-bit output block. The decryption algorithm D is defined to be the inverse function of encryption, i.e., D = E^(-1). More formally, a block cipher is specified by an encryption function
E_K(P) := E(K, P) : {0,1}^k × {0,1}^n → {0,1}^n
which takes as input a key K, of bit length k (called the key size), and a bit string P, of length n (called the block size), and returns a string C of n bits. P is called the plaintext, and C is termed the ciphertext. For each K, the function E_K(P) is required to be an invertible mapping on {0,1}^n. The inverse for E is defined as a function
E_K^(-1)(C) := D_K(C) = D(K, C) : {0,1}^k × {0,1}^n → {0,1}^n
taking a key K and a ciphertext C to return a plaintext value P, such that
for all K: D_K(E_K(P)) = P.
For example, a block cipher encryption algorithm might take a 128-bit block of plaintext as input, and output a corresponding 128-bit block of ciphertext. The exact transformation is controlled using a second input – the secret key. Decryption is similar: the decryption algorithm takes, in this example, a 128-bit block of ciphertext together with the secret key, and yields the original 128-bit block of plain text.
For each key K, EK is a permutation (a bijective mapping) over the set of input blocks. Each key selects one permutation from the set of possible permutations.
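To make this interface concrete, the following Python sketch uses a deliberately trivial, insecure 16-bit "cipher" invented purely for illustration; it shows encryption and decryption as a matched pair of keyed, invertible mappings on fixed-size blocks.

    def toy_encrypt(key, block):
        # A deliberately trivial keyed permutation on 16-bit blocks:
        # XOR with the key, then rotate left by 3 bits.  Not secure.
        x = (block ^ key) & 0xFFFF
        return ((x << 3) | (x >> 13)) & 0xFFFF

    def toy_decrypt(key, block):
        # Inverse of toy_encrypt: rotate right by 3 bits, then XOR with the key.
        x = ((block >> 3) | (block << 13)) & 0xFFFF
        return x ^ key

    key = 0x1234
    for plaintext in (0x0000, 0xBEEF, 0xFFFF):
        assert toy_decrypt(key, toy_encrypt(key, plaintext)) == plaintext

Any real block cipher exposes the same shape of interface; only the internal transformation is vastly more complex.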
History
The modern design of block ciphers is based on the concept of an iterated product cipher. In his seminal 1949 publication, Communication Theory of Secrecy Systems, Claude Shannon analyzed product ciphers and suggested them as a means of effectively improving security by combining simple operations such as substitutions and permutations. Iterated product ciphers carry out encryption in multiple rounds, each of which uses a different subkey derived from the original key. One widespread implementation of such ciphers, named a Feistel network after Horst Feistel, is notably implemented in the DES cipher. Many other realizations of block ciphers, such as the AES, are classified as substitution–permutation networks.
The root of all cryptographic block formats used within the Payment Card Industry Data Security Standard (PCI DSS) and American National Standards Institute (ANSI) standards lies with the Atalla Key Block (AKB), which was a key innovation of the Atalla Box, the first hardware security module (HSM). It was developed in 1972 by Mohamed M. Atalla, founder of Atalla Corporation (now Utimaco Atalla), and released in 1973. The AKB was a key block, which is required to securely interchange symmetric keys or PINs with other actors of the banking industry. This secure interchange is performed using the AKB format. The Atalla Box protected over 90% of all ATM networks in operation as of 1998, and Atalla products still secure the majority of the world's ATM transactions as of 2014.
The publication of the DES cipher by the United States National Bureau of Standards (subsequently the U.S. National Institute of Standards and Technology, NIST) in 1977 was fundamental in the public understanding of modern block cipher design. It also influenced the academic development of cryptanalytic attacks. Both differential and linear cryptanalysis arose out of studies on the DES design. Today there is a palette of attack techniques against which a block cipher must be secure, in addition to being robust against brute-force attacks.
Design
Iterated block ciphers
Most block cipher algorithms are classified as iterated block ciphers which means that they transform fixed-size blocks of plaintext into identically sized blocks of ciphertext, via the repeated application of an invertible transformation known as the round function, with each iteration referred to as a round.
Usually, the round function R takes different round keys Ki as a second input, which are derived from the original key:
M_i = R(M_{i-1}, K_i)
where M_0 is the plaintext and M_r the ciphertext, with r being the number of rounds.
Frequently, key whitening is used in addition to this. At the beginning and the end, the data is modified with key material (often with XOR, but simple arithmetic operations like adding and subtracting are also used):
M_0 = M ⊕ K_0
C = M_r ⊕ K_{r+1}
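As a rough illustration of this structure, the following Python sketch wires together an arbitrary round function, per-round subkeys and XOR key whitening; the round function and key schedule are placeholders, not taken from any standardised design.

    def iterated_encrypt(block, round_keys, round_function, whitening_keys):
        k_pre, k_post = whitening_keys
        state = block ^ k_pre               # initial key whitening
        for k in round_keys:                # r rounds, each with its own subkey
            state = round_function(state, k)
        return state ^ k_post               # final key whitening

Decryption runs the inverse round function with the round keys in reverse order and the two whitening keys swapped.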
Given one of the standard iterated block cipher design schemes, it is fairly easy to construct a block cipher that is cryptographically secure, simply by using a large number of rounds. However, this will make the cipher inefficient. Thus, efficiency is the most important additional design criterion for professional ciphers. Further, a good block cipher is designed to avoid side-channel attacks, such as branch prediction and input-dependent memory accesses that might leak secret data via the cache state or the execution time. In addition, the cipher should be concise, for small hardware and software implementations. Finally, the cipher should be easily cryptanalyzable, such that it can be shown how many rounds the cipher needs to be reduced to, so that the existing cryptographic attacks would work – and, conversely, that it can be shown that the number of actual rounds is large enough to protect against them.
Substitution–permutation networks
One important type of iterated block cipher known as a substitution–permutation network (SPN) takes a block of the plaintext and the key as inputs, and applies several alternating rounds, each consisting of a substitution stage followed by a permutation stage, to produce each block of ciphertext output. The non-linear substitution stage mixes the key bits with those of the plaintext, creating Shannon's confusion. The linear permutation stage then dissipates redundancies, creating diffusion.
A substitution box (S-box) substitutes a small block of input bits with another block of output bits. This substitution must be one-to-one, to ensure invertibility (hence decryption). A secure S-box will have the property that changing one input bit will change about half of the output bits on average, exhibiting what is known as the avalanche effect—i.e. it has the property that each output bit will depend on every input bit.
A permutation box (P-box) is a permutation of all the bits: it takes the outputs of all the S-boxes of one round, permutes the bits, and feeds them into the S-boxes of the next round. A good P-box has the property that the output bits of any S-box are distributed to as many S-box inputs as possible.
At each round, the round key (obtained from the key with some simple operations, for instance, using S-boxes and P-boxes) is combined using some group operation, typically XOR.
Decryption is done by simply reversing the process (using the inverses of the S-boxes and P-boxes and applying the round keys in reversed order).
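The following toy substitution–permutation network in Python illustrates these stages on a 16-bit block with four 4-bit S-boxes and a bit transposition. The S-box values, permutation and round keys are illustrative choices only (and real designs usually treat the final round specially), so this is a sketch of the structure rather than any standardised cipher.

    SBOX = [0xE, 0x4, 0xD, 0x1, 0x2, 0xF, 0xB, 0x8,
            0x3, 0xA, 0x6, 0xC, 0x5, 0x9, 0x0, 0x7]
    INV_SBOX = [SBOX.index(x) for x in range(16)]
    # Bit permutation: bit i moves to position PBOX[i]; this "transpose"
    # permutation of a 4x4 bit grid happens to be its own inverse.
    PBOX = [0, 4, 8, 12, 1, 5, 9, 13, 2, 6, 10, 14, 3, 7, 11, 15]

    def substitute(state, sbox):
        # Apply the 4-bit S-box to each nibble of the 16-bit state.
        return sum(sbox[(state >> (4 * i)) & 0xF] << (4 * i) for i in range(4))

    def permute(state):
        out = 0
        for i in range(16):
            if (state >> i) & 1:
                out |= 1 << PBOX[i]
        return out

    def spn_encrypt(block, round_keys):
        state = block
        for k in round_keys:                       # key mixing, then S-boxes, then P-box
            state = permute(substitute(state ^ k, SBOX))
        return state

    def spn_decrypt(block, round_keys):
        state = block
        for k in reversed(round_keys):             # undo each round in reverse order
            state = substitute(permute(state), INV_SBOX) ^ k
        return state

    round_keys = [0x3A94, 0x10C5, 0xE7B2]
    assert spn_decrypt(spn_encrypt(0x1234, round_keys), round_keys) == 0x1234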
Feistel ciphers
In a Feistel cipher, the block of plain text to be encrypted is split into two equal-sized halves. The round function is applied to one half, using a subkey, and then the output is XORed with the other half. The two halves are then swapped.
Let F be the round function and let K_0, K_1, ..., K_n be the sub-keys for the rounds 0, 1, ..., n respectively.
Then the basic operation is as follows:
Split the plaintext block into two equal pieces, (L_0, R_0).
For each round i = 0, 1, ..., n, compute
L_{i+1} = R_i
R_{i+1} = L_i ⊕ F(R_i, K_i).
Then the ciphertext is (R_{n+1}, L_{n+1}).
Decryption of a ciphertext (R_{n+1}, L_{n+1}) is accomplished by computing, for i = n, n-1, ..., 0,
R_i = L_{i+1}
L_i = R_{i+1} ⊕ F(L_{i+1}, K_i).
Then (L_0, R_0) is the plaintext again.
One advantage of the Feistel model compared to a substitution–permutation network is that the round function does not have to be invertible.
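A minimal Python sketch of the scheme above, with the round function F and the subkeys left as caller-supplied placeholders, shows both the network itself and the fact that decryption is the same network run with the subkeys in reverse order.

    def feistel_encrypt(L, R, subkeys, F):
        # F is the round function; it does not need to be invertible.
        for K in subkeys:
            L, R = R, L ^ F(R, K)
        return R, L              # final swap: ciphertext is (R_{n+1}, L_{n+1})

    def feistel_decrypt(L, R, subkeys, F):
        # Decryption is the same network run with the subkeys reversed.
        return feistel_encrypt(L, R, list(reversed(subkeys)), F)

    # Illustrative round function and keys (arbitrary values, not from any standard).
    F = lambda half, key: (half * 2654435761 + key) & 0xFFFFFFFF
    keys = [0xA5A5A5A5, 0x3C3C3C3C, 0x0F0F0F0F]
    c = feistel_encrypt(0x01234567, 0x89ABCDEF, keys, F)
    assert feistel_decrypt(*c, keys, F) == (0x01234567, 0x89ABCDEF)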
Lai–Massey ciphers
The Lai–Massey scheme offers security properties similar to those of the Feistel structure. It also shares its advantage that the round function does not have to be invertible. Another similarity is that it also splits the input block into two equal pieces. However, the round function is applied to the difference between the two, and the result is then added to both half blocks.
Let F be the round function and H a half-round function, and let K_0, K_1, ..., K_n be the sub-keys for the rounds 0, 1, ..., n respectively.
Then the basic operation is as follows:
Split the plaintext block into two equal pieces, (L_0, R_0).
For each round i = 0, 1, ..., n, compute
T_i = F(L_i - R_i, K_i)
(L_{i+1}, R_{i+1}) = H(L_i + T_i, R_i + T_i)
where the additions and subtractions are taken in a fixed group (for example, modulo 2^w for w-bit halves).
Then the ciphertext is (L_{n+1}, R_{n+1}).
Decryption of a ciphertext (L_{n+1}, R_{n+1}) is accomplished by computing, for i = n, n-1, ..., 0,
(L_i + T_i, R_i + T_i) = H^(-1)(L_{i+1}, R_{i+1})
T_i = F(L_i - R_i, K_i)
where T_i can be recomputed because adding T_i to both halves leaves their difference unchanged; subtracting T_i from both halves then recovers (L_i, R_i).
Then (L_0, R_0) is the plaintext again.
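The invertibility argument can be seen in a short Python sketch. The half-round function H below is a simple rotation chosen only so that the example is self-contained; an actual Lai–Massey design such as IDEA uses a carefully chosen orthomorphism instead, and all parameters here are illustrative.

    MASK = 0xFFFFFFFF            # 32-bit halves, addition modulo 2**32

    def H(L, R):
        # Simple invertible half-round function, for illustration only.
        return ((L << 1) | (L >> 31)) & MASK, R

    def H_inv(L, R):
        return ((L >> 1) | (L << 31)) & MASK, R

    def lai_massey_encrypt(L, R, subkeys, F):
        for K in subkeys:
            T = F((L - R) & MASK, K)        # round function on the difference
            L, R = (L + T) & MASK, (R + T) & MASK
            L, R = H(L, R)
        return L, R

    def lai_massey_decrypt(L, R, subkeys, F):
        for K in reversed(subkeys):
            L, R = H_inv(L, R)
            T = F((L - R) & MASK, K)        # difference unchanged by adding T to both halves
            L, R = (L - T) & MASK, (R - T) & MASK
        return L, R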
Operations
ARX (add–rotate–XOR)
Many modern block ciphers and hashes are ARX algorithms—their round function involves only three operations: (A) modular addition, (R) rotation with fixed rotation amounts, and (X) XOR. Examples include ChaCha20, Speck, XXTEA, and BLAKE. Many authors draw an ARX network, a kind of data flow diagram, to illustrate such a round function.
These ARX operations are popular because they are relatively fast and cheap in hardware and software, their implementation can be made extremely simple, and also because they run in constant time, and therefore are immune to timing attacks. The rotational cryptanalysis technique attempts to attack such round functions.
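A single generic add–rotate–XOR step might look like the following Python fragment; the word size and rotation amount are arbitrary illustrative choices rather than parameters of any particular cipher.

    MASK32 = 0xFFFFFFFF

    def rotl32(x, n):
        return ((x << n) | (x >> (32 - n))) & MASK32

    def arx_mix(a, b, rotation=13):
        # One generic add-rotate-xor step on two 32-bit words.
        a = (a + b) & MASK32        # A: modular addition
        b = rotl32(b, rotation)     # R: rotation by a fixed amount
        b ^= a                      # X: exclusive or
        return a, b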
Other operations
Other operations often used in block ciphers include
data-dependent rotations as in RC5 and RC6,
a substitution box implemented as a lookup table as in Data Encryption Standard and Advanced Encryption Standard,
a permutation box,
and multiplication as in IDEA.
Modes of operation
A block cipher by itself allows encryption only of a single data block of the cipher's block length. For a variable-length message, the data must first be partitioned into separate cipher blocks. In the simplest case, known as electronic codebook (ECB) mode, a message is first split into separate blocks of the cipher's block size (possibly extending the last block with padding bits), and then each block is encrypted and decrypted independently. However, such a naive method is generally insecure because equal plaintext blocks will always generate equal ciphertext blocks (for the same key), so patterns in the plaintext message become evident in the ciphertext output.
To overcome this limitation, several so-called block cipher modes of operation have been designed and specified in national recommendations such as NIST 800-38A and BSI TR-02102 and international standards such as ISO/IEC 10116. The general concept is to use randomization of the plaintext data based on an additional input value, frequently called an initialization vector, to create what is termed probabilistic encryption. In the popular cipher block chaining (CBC) mode, for encryption to be secure the initialization vector passed along with the plaintext message must be a random or pseudo-random value, which is added in an exclusive-or manner to the first plaintext block before it is encrypted. The resultant ciphertext block is then used as the new initialization vector for the next plaintext block. In the cipher feedback (CFB) mode, which emulates a self-synchronizing stream cipher, the initialization vector is first encrypted and then added to the plaintext block. The output feedback (OFB) mode repeatedly encrypts the initialization vector to create a key stream for the emulation of a synchronous stream cipher. The newer counter (CTR) mode similarly creates a key stream, but has the advantage of only needing unique and not (pseudo-)random values as initialization vectors; the needed randomness is derived internally by using the initialization vector as a block counter and encrypting this counter for each block.
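The sketch below shows, in Python, how two of these modes can be layered over an abstract single-block encryption call. encrypt_block is a placeholder for any block cipher, and details such as IV generation, padding and authentication are deliberately omitted.

    def xor_bytes(a, b):
        return bytes(x ^ y for x, y in zip(a, b))

    def cbc_encrypt(blocks, iv, encrypt_block):
        # blocks: list of full-sized plaintext blocks; encrypt_block: one-block cipher call.
        ciphertext, previous = [], iv
        for block in blocks:
            previous = encrypt_block(xor_bytes(block, previous))
            ciphertext.append(previous)
        return ciphertext

    def ctr_encrypt(data, initial_counter_block, encrypt_block, block_size=16):
        # CTR mode: encrypt successive counter values to form a key stream.
        out = bytearray()
        counter = int.from_bytes(initial_counter_block, "big")
        for offset in range(0, len(data), block_size):
            keystream = encrypt_block(counter.to_bytes(block_size, "big"))
            chunk = data[offset:offset + block_size]
            out += xor_bytes(chunk, keystream[:len(chunk)])
            counter += 1
        return bytes(out)

In CBC each ciphertext block feeds the next XOR, whereas in CTR the cipher is only ever applied to counter values, so CTR decryption is the same function applied again.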
From a security-theoretic point of view, modes of operation must provide what is known as semantic security. Informally, it means that given some ciphertext under an unknown key one cannot practically derive any information from the ciphertext (other than the length of the message) over what one would have known without seeing the ciphertext. It has been shown that all of the modes discussed above, with the exception of the ECB mode, provide this property under so-called chosen plaintext attacks.
Padding
Some modes such as the CBC mode only operate on complete plaintext blocks. Simply extending the last block of a message with zero-bits is insufficient since it does not allow a receiver to easily distinguish messages that differ only in the amount of padding bits. More importantly, such a simple solution gives rise to very efficient padding oracle attacks. A suitable padding scheme is therefore needed to extend the last plaintext block to the cipher's block size. While many popular schemes described in standards and in the literature have been shown to be vulnerable to padding oracle attacks, a solution which adds a one-bit and then extends the last block with zero-bits, standardized as "padding method 2" in ISO/IEC 9797-1, has been proven secure against these attacks.
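The bit-padding scheme mentioned above ("padding method 2") is simple to state in code. The following Python helpers are a sketch of that rule, written for clarity rather than constant-time operation: append a single 1 bit (a 0x80 byte on byte-oriented data) and then zero bytes up to the block boundary.

    def pad_method2(message, block_size):
        # Append 0x80 followed by as many 0x00 bytes as needed to reach
        # a multiple of the block size (a full extra block if already aligned).
        padding_length = block_size - (len(message) % block_size)
        return message + b"\x80" + b"\x00" * (padding_length - 1)

    def unpad_method2(padded):
        # Strip trailing zero bytes, then the mandatory 0x80 marker.
        stripped = padded.rstrip(b"\x00")
        if not stripped.endswith(b"\x80"):
            raise ValueError("invalid padding")
        return stripped[:-1]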
Cryptanalysis
Brute-force attacks
Because an n-bit block cipher can start to leak information once about 2^(n/2) blocks have been encrypted under a single key (the birthday bound), the cipher's security degrades quadratically as the block size shrinks, and this needs to be taken into account when selecting a block size. There is a trade-off though, as large block sizes can result in the algorithm becoming inefficient to operate. Earlier block ciphers such as the DES have typically selected a 64-bit block size, while newer designs such as the AES support block sizes of 128 bits or more, with some ciphers supporting a range of different block sizes.
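The practical effect of block size can be estimated with a few lines of Python, assuming the usual rule of thumb that ciphertext-block collisions are expected after roughly 2**(n/2) blocks under one key.

    for block_bits in (64, 128):
        blocks_to_collision = 2 ** (block_bits // 2)              # birthday bound
        data_gib = blocks_to_collision * (block_bits // 8) / 2**30
        print(f"{block_bits}-bit blocks: ~2**{block_bits // 2} blocks "
              f"(about {data_gib:,.0f} GiB) before collisions are likely")

For a 64-bit block this works out to only about 32 GiB of data, one reason modern designs prefer 128-bit blocks.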
Differential cryptanalysis
Linear cryptanalysis
Linear cryptanalysis is a form of cryptanalysis based on finding affine approximations to the action of a cipher. Linear cryptanalysis is one of the two most widely used attacks on block ciphers; the other being differential cryptanalysis.
The discovery is attributed to Mitsuru Matsui, who first applied the technique to the FEAL cipher (Matsui and Yamagishi, 1992).
Integral cryptanalysis
Integral cryptanalysis is a cryptanalytic attack that is particularly applicable to block ciphers based on substitution–permutation networks. Unlike differential cryptanalysis, which uses pairs of chosen plaintexts with a fixed XOR difference, integral cryptanalysis uses sets or even multisets of chosen plaintexts of which part is held constant and another part varies through all possibilities. For example, an attack might use 256 chosen plaintexts that have all but 8 of their bits the same, but all differ in those 8 bits. Such a set necessarily has an XOR sum of 0, and the XOR sums of the corresponding sets of ciphertexts provide information about the cipher's operation. This contrast between the differences of pairs of texts and the sums of larger sets of texts inspired the name "integral cryptanalysis", borrowing the terminology of calculus.
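The zero XOR-sum of such a structured set is easy to verify directly, as in this small Python check; the block length, constant filler and position of the active byte are arbitrary choices for the example.

    from functools import reduce
    from operator import xor

    # 256 plaintexts identical everywhere except one byte that takes every
    # value exactly once have an XOR-sum of zero in every byte position.
    base = bytearray(b"\x42" * 8)
    plaintexts = []
    for v in range(256):
        p = bytearray(base)
        p[3] = v                   # the "active" byte runs through all 256 values
        plaintexts.append(bytes(p))

    for position in range(8):
        assert reduce(xor, (p[position] for p in plaintexts)) == 0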
Other techniques
In addition to linear and differential cryptanalysis, there is a growing catalog of attacks: truncated differential cryptanalysis, partial differential cryptanalysis, integral cryptanalysis, which encompasses square and integral attacks, slide attacks, boomerang attacks, the XSL attack, impossible differential cryptanalysis and algebraic attacks. For a new block cipher design to have any credibility, it must demonstrate evidence of security against known attacks.
Provable security
When a block cipher is used in a given mode of operation, the resulting algorithm should ideally be about as secure as the block cipher itself. ECB (discussed above) emphatically lacks this property: regardless of how secure the underlying block cipher is, ECB mode can easily be attacked. On the other hand, CBC mode can be proven to be secure under the assumption that the underlying block cipher is likewise secure. Note, however, that making statements like this requires formal mathematical definitions for what it means for an encryption algorithm or a block cipher to "be secure". This section describes two common notions for what properties a block cipher should have. Each corresponds to a mathematical model that can be used to prove properties of higher level algorithms, such as CBC.
This general approach to cryptography – proving higher-level algorithms (such as CBC) are secure under explicitly stated assumptions regarding their components (such as a block cipher) – is known as provable security.
Standard model
Informally, a block cipher is secure in the standard model if an attacker cannot tell the difference between the block cipher (equipped with a random key) and a random permutation.
To be a bit more precise, let E be an n-bit block cipher. We imagine the following game:
1. The person running the game flips a coin.
If the coin lands on heads, he chooses a random key K and defines the function f = EK.
If the coin lands on tails, he chooses a random permutation π on the set of n-bit strings, and defines the function f = π.
2. The attacker chooses an n-bit string X, and the person running the game tells him the value of f(X).
3. Step 2 is repeated a total of q times. (Each of these q interactions is a query.)
4. The attacker guesses how the coin landed. He wins if his guess is correct.
The attacker, which we can model as an algorithm, is called an adversary. The function f (which the adversary was able to query) is called an oracle.
Note that an adversary can trivially ensure a 50% chance of winning simply by guessing at random (or even by, for example, always guessing "heads"). Therefore, let PE(A) denote the probability that the adversary A wins this game against E, and define the advantage of A as 2(PE(A) − 1/2). It follows that if A guesses randomly, its advantage will be 0; on the other hand, if A always wins, then its advantage is 1. The block cipher E is a pseudo-random permutation (PRP) if no adversary has an advantage significantly greater than 0, given specified restrictions on q and the adversary's running time. If in Step 2 above adversaries have the option of learning f−1(X) instead of f(X) (but still have only small advantages) then E is a strong PRP (SPRP). An adversary is non-adaptive if it chooses all q values for X before the game begins (that is, it does not use any information gleaned from previous queries to choose each X as it goes).
These definitions have proven useful for analyzing various modes of operation. For example, one can define a similar game for measuring the security of a block cipher-based encryption algorithm, and then try to show (through a reduction argument) that the probability of an adversary winning this new game is not much more than PE(A) for some A. (The reduction typically provides limits on q and the running time of A.) Equivalently, if PE(A) is small for all relevant A, then no attacker has a significant probability of winning the new game. This formalizes the idea that the higher-level algorithm inherits the block cipher's security.
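The game itself can be simulated for small toy parameters. In the following Python sketch the "cipher" is a deliberately weak keyed XOR and the adversary exploits the fact that it preserves XOR differences, so the measured advantage comes out close to 1; all names and parameters here are invented for illustration.

    import random

    BLOCK_BITS = 16
    DOMAIN = 2 ** BLOCK_BITS

    def weak_cipher(key, x):
        # A deliberately bad "block cipher": XOR with the key.  It is a
        # permutation for every key, but trivially distinguishable.
        return x ^ key

    def xor_structure_adversary(oracle):
        # weak_cipher preserves XOR differences: E(x) ^ E(y) == x ^ y.
        x, y = 0x0000, 0x1234
        return (oracle(x) ^ oracle(y)) == (x ^ y)

    def play_game(adversary, trials=200):
        wins = 0
        for _ in range(trials):
            heads = random.random() < 0.5
            if heads:
                key = random.randrange(DOMAIN)
                oracle = lambda x: weak_cipher(key, x)
            else:
                perm = list(range(DOMAIN))
                random.shuffle(perm)
                oracle = lambda x: perm[x]
            wins += adversary(oracle) == heads
        return 2 * (wins / trials - 0.5)        # empirical advantage

    print("estimated advantage:", play_game(xor_structure_adversary))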
Ideal cipher model
Practical evaluation
Block ciphers may be evaluated according to multiple criteria in practice. Common factors include:
Key parameters, such as its key size and block size, both of which provide an upper bound on the security of the cipher.
The estimated security level, which is based on the confidence gained in the block cipher design after it has largely withstood major efforts in cryptanalysis over time, the design's mathematical soundness, and the existence of practical or certificational attacks.
The cipher's complexity and its suitability for implementation in hardware or software. Hardware implementations may measure the complexity in terms of gate count or energy consumption, which are important parameters for resource-constrained devices.
The cipher's performance in terms of processing throughput on various platforms, including its memory requirements.
The cost of the cipher, which refers to licensing requirements that may apply due to intellectual property rights.
The flexibility of the cipher, which includes its ability to support multiple key sizes and block lengths.
Notable block ciphers
Lucifer / DES
Lucifer is generally considered to be the first civilian block cipher, developed at IBM in the 1970s based on work done by Horst Feistel. A revised version of the algorithm was adopted as a U.S. government Federal Information Processing Standard: FIPS PUB 46 Data Encryption Standard (DES). It was chosen by the U.S. National Bureau of Standards (NBS) after a public invitation for submissions and some internal changes by NBS (and, potentially, the NSA). DES was publicly released in 1976 and has been widely used.
DES was designed to, among other things, resist a certain cryptanalytic attack known to the NSA and rediscovered by IBM, though unknown publicly until rediscovered again and published by Eli Biham and Adi Shamir in the late 1980s. The technique is called differential cryptanalysis and remains one of the few general attacks against block ciphers; linear cryptanalysis is another, but may have been unknown even to the NSA, prior to its publication by Mitsuru Matsui. DES prompted a large amount of other work and publications in cryptography and cryptanalysis in the open community and it inspired many new cipher designs.
DES has a block size of 64 bits and a key size of 56 bits. 64-bit blocks became common in block cipher designs after DES. Key length depended on several factors, including government regulation. Many observers in the 1970s commented that the 56-bit key length used for DES was too short. As time went on, its inadequacy became apparent, especially after a special purpose machine designed to break DES was demonstrated in 1998 by the Electronic Frontier Foundation. An extension to DES, Triple DES, triple-encrypts each block with either two independent keys (112-bit key and 80-bit security) or three independent keys (168-bit key and 112-bit security). It was widely adopted as a replacement. As of 2011, the three-key version is still considered secure, though the National Institute of Standards and Technology (NIST) standards no longer permit the use of the two-key version in new applications, due to its 80-bit security level.
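The Triple DES construction itself is just a composition of single-DES operations. The sketch below assumes des_encrypt and des_decrypt are provided by some DES implementation (placeholders here, not a specific library API); it shows the standard encrypt–decrypt–encrypt (EDE) ordering.

```python
def triple_des_ede_encrypt(des_encrypt, des_decrypt, k1, k2, k3, block):
    # Three-key 3DES: C = E_K3(D_K2(E_K1(P))); the two-key variant sets k3 = k1.
    return des_encrypt(k3, des_decrypt(k2, des_encrypt(k1, block)))

def triple_des_ede_decrypt(des_encrypt, des_decrypt, k1, k2, k3, block):
    # Decryption reverses the steps: P = D_K1(E_K2(D_K3(C))).
    return des_decrypt(k1, des_encrypt(k2, des_decrypt(k3, block)))
```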
IDEA
The International Data Encryption Algorithm (IDEA) is a block cipher designed by James Massey of ETH Zurich and Xuejia Lai; it was first described in 1991, as an intended replacement for DES.
IDEA operates on 64-bit blocks using a 128-bit key, and consists of a series of eight identical transformations (a round) and an output transformation (the half-round). The processes for encryption and decryption are similar. IDEA derives much of its security by interleaving operations from different groups – modular addition and multiplication, and bitwise exclusive or (XOR) – which are algebraically "incompatible" in some sense.
The designers analysed IDEA to measure its strength against differential cryptanalysis and concluded that it is immune under certain assumptions. No successful linear or algebraic weaknesses have been reported. The best attack that applies to all keys can break full 8.5-round IDEA using a narrow-bicliques attack about four times faster than brute force.
RC5
RC5 is a block cipher designed by Ronald Rivest in 1994 which, unlike many other ciphers, has a variable block size (32, 64 or 128 bits), key size (0 to 2040 bits) and number of rounds (0 to 255). The original suggested choice of parameters was a block size of 64 bits, a 128-bit key and 12 rounds.
A key feature of RC5 is the use of data-dependent rotations; one of the goals of RC5 was to prompt the study and evaluation of such operations as a cryptographic primitive. RC5 also consists of a number of modular additions and XORs. The general structure of the algorithm is a Feistel-like network. The encryption and decryption routines can be specified in a few lines of code. The key schedule, however, is more complex, expanding the key using an essentially one-way function with the binary expansions of both e and the golden ratio as sources of "nothing up my sleeve numbers". The tantalising simplicity of the algorithm together with the novelty of the data-dependent rotations has made RC5 an attractive object of study for cryptanalysts.
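To illustrate that claim, here is a hedged sketch of the RC5 encryption routine for the 32-bit word size (RC5-32), following the published description; S is assumed to be the already-expanded key table of 2r + 2 words, which the key schedule mentioned above would produce.

```python
W = 32                      # word size in bits (RC5-32)
MASK = (1 << W) - 1

def rotl(x, s):
    # Rotate a W-bit word left by s positions (only the low bits of s matter).
    s %= W
    return ((x << s) | (x >> (W - s))) & MASK

def rc5_encrypt_block(A, B, S, rounds=12):
    # A, B are the two input words; S is the expanded key table of 2*rounds + 2 words.
    A = (A + S[0]) & MASK
    B = (B + S[1]) & MASK
    for i in range(1, rounds + 1):
        A = (rotl(A ^ B, B) + S[2 * i]) & MASK       # data-dependent rotation by B
        B = (rotl(B ^ A, A) + S[2 * i + 1]) & MASK   # data-dependent rotation by A
    return A, B
```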
12-round RC5 (with 64-bit blocks) is susceptible to a differential attack using 2^44 chosen plaintexts. 18–20 rounds are suggested as sufficient protection.
Rijndael / AES
The Rijndael cipher, developed by the Belgian cryptographers Joan Daemen and Vincent Rijmen, was one of the competing designs to replace DES. It won the five-year public competition to become the AES (Advanced Encryption Standard).
Adopted by NIST in 2001, AES has a fixed block size of 128 bits and a key size of 128, 192, or 256 bits, whereas Rijndael can be specified with block and key sizes in any multiple of 32 bits, with a minimum of 128 bits. The block size has a maximum of 256 bits, but the key size has no theoretical maximum. AES operates on a 4×4 column-major order matrix of bytes, termed the state (versions of Rijndael with a larger block size have additional columns in the state).
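As a small illustration of the column-major layout (a sketch, not the standard's own pseudocode), the 16 input bytes fill the state so that input byte r + 4c lands in row r, column c:

```python
def bytes_to_state(block):
    # Fill the 4x4 AES state column by column: state[r][c] = block[r + 4*c].
    assert len(block) == 16
    return [[block[r + 4 * c] for c in range(4)] for r in range(4)]

def state_to_bytes(state):
    # Inverse mapping: read the state back out column by column.
    return bytes(state[r][c] for c in range(4) for r in range(4))
```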
Blowfish
Blowfish is a block cipher, designed in 1993 by Bruce Schneier and included in a large number of cipher suites and encryption products. Blowfish has a 64-bit block size and a variable key length from 1 bit up to 448 bits. It is a 16-round Feistel cipher and uses large key-dependent S-boxes. Notable features of the design include the key-dependent S-boxes and a highly complex key schedule.
It was designed as a general-purpose algorithm, intended as an alternative to the ageing DES and free of the problems and constraints associated with other algorithms. At the time Blowfish was released, many other designs were proprietary, encumbered by patents or were commercial/government secrets. Schneier has stated that, "Blowfish is unpatented, and will remain so in all countries. The algorithm is hereby placed in the public domain, and can be freely used by anyone." The same applies to Twofish, a successor algorithm from Schneier.
Generalizations
Tweakable block ciphers
M. Liskov, R. Rivest, and D. Wagner have described a generalized version of block ciphers called "tweakable" block ciphers. A tweakable block cipher accepts a second input called the tweak along with its usual plaintext or ciphertext input. The tweak, along with the key, selects the permutation computed by the cipher. If changing tweaks is sufficiently lightweight (compared with a usually fairly expensive key setup operation), then some interesting new operation modes become possible. The disk encryption theory article describes some of these modes.
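One of the constructions given by Liskov, Rivest, and Wagner builds a tweakable cipher from an ordinary one by masking the input and output with a hash of the tweak. The sketch below is illustrative only: encrypt_block stands for any block-cipher encryption function, and h is assumed to be drawn from a suitable XOR-universal hash family; both are placeholders rather than concrete APIs.

```python
def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def tweakable_encrypt(encrypt_block, h, key, tweak, block):
    # E~_{K,h}(T, M) = E_K(M xor h(T)) xor h(T): the tweak selects the mask,
    # while the key selects the underlying permutation.
    mask = h(tweak)
    return xor_bytes(encrypt_block(key, xor_bytes(block, mask)), mask)
```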
Format-preserving encryption
Block ciphers traditionally work over a binary alphabet. That is, both the input and the output are binary strings, consisting of n zeroes and ones. In some situations, however, one may wish to have a block cipher that works over some other alphabet; for example, encrypting 16-digit credit card numbers in such a way that the ciphertext is also a 16-digit number might facilitate adding an encryption layer to legacy software. This is an example of format-preserving encryption. More generally, format-preserving encryption requires a keyed permutation on some finite language. This makes format-preserving encryption schemes a natural generalization of (tweakable) block ciphers. In contrast, traditional encryption schemes, such as CBC, are not permutations because the same plaintext can encrypt to multiple different ciphertexts, even when using a fixed key.
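One simple technique used in format-preserving encryption (not the only one) is cycle walking: apply a keyed permutation on a slightly larger set and iterate until the result falls back inside the target domain. A minimal sketch, assuming permute is a keyed permutation on {0, …, N−1} with N at least the domain size:

```python
def cycle_walk(permute, x, domain_size):
    # Because permute is a permutation, walking from a point inside the domain
    # must eventually land back inside the domain, so the loop terminates.
    y = permute(x)
    while y >= domain_size:
        y = permute(y)
    return y
```

Decryption walks the cycle in the other direction using the inverse permutation.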
Relation to other cryptographic primitives
Block ciphers can be used to build other cryptographic primitives, such as those below. For these other primitives to be cryptographically secure, care has to be taken to build them the right way.
Stream ciphers can be built using block ciphers. OFB mode and CTR mode are block cipher modes of operation that turn a block cipher into a stream cipher (see the sketch after this list).
Cryptographic hash functions can be built using block ciphers. See one-way compression function for descriptions of several such methods. The methods resemble the block cipher modes of operation usually used for encryption.
Cryptographically secure pseudorandom number generators (CSPRNGs) can be built using block ciphers.
Secure pseudorandom permutations of arbitrarily sized finite sets can be constructed with block ciphers; see Format-Preserving Encryption.
A publicly known unpredictable permutation combined with key whitening is enough to construct a block cipher, such as the single-key Even-Mansour cipher, perhaps the simplest possible provably secure block cipher.
Message authentication codes (MACs) are often built from block ciphers. CBC-MAC, OMAC and PMAC are such MACs.
Authenticated encryption is also built from block ciphers. It encrypts and MACs at the same time, providing both confidentiality and authentication. CCM, EAX, GCM and OCB are such authenticated encryption modes.
Just as block ciphers can be used to build hash functions, hash functions can be used to build block ciphers. Examples of such block ciphers are SHACAL, BEAR and LION.
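As referenced in the stream-cipher item above, here is a minimal sketch of counter (CTR) mode keystream generation. encrypt_block stands for any block cipher with 16-byte blocks, and the nonce/counter split shown is one common convention rather than a fixed rule.

```python
def ctr_keystream(encrypt_block, key, nonce, n_blocks, block_size=16):
    # Encrypt successive counter blocks; XORing the keystream with the
    # plaintext gives the ciphertext (and vice versa).
    stream = b""
    for counter in range(n_blocks):
        counter_block = nonce + counter.to_bytes(block_size - len(nonce), "big")
        stream += encrypt_block(key, counter_block)
    return stream

def xor_with_keystream(data, stream):
    return bytes(d ^ s for d, s in zip(data, stream))
```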
See also
Cipher security summary
Topics in cryptography
XOR cipher
References
Further reading
External links
A list of many symmetric algorithms, the majority of which are block ciphers.
The block cipher lounge
What is a block cipher? from RSA FAQ
Block Cipher based on Gold Sequences and Chaotic Logistic Tent System
Cryptographic primitives
Arab inventions
Egyptian inventions |
5244 | https://en.wikipedia.org/wiki/Cipher | Cipher | In cryptography, a cipher (or cypher) is an algorithm for performing encryption or decryption—a series of well-defined steps that can be followed as a procedure. An alternative, less common term is encipherment. To encipher or encode is to convert information into cipher or code. In common parlance, "cipher" is synonymous with "code", as they are both a set of steps that encrypt a message; however, the concepts are distinct in cryptography, especially classical cryptography.
Codes generally substitute strings of characters of a different length in the output, while ciphers generally substitute the same number of characters as are input. There are exceptions, and some cipher systems may use slightly more, or fewer, characters in the output than the number that were input.
Codes operated by substituting according to a large codebook which linked a random string of characters or numbers to a word or phrase. For example, "UQJHSE" could be the code for "Proceed to the following coordinates." When using a cipher the original information is known as plaintext, and the encrypted form as ciphertext. The ciphertext message contains all the information of the plaintext message, but is not in a format readable by a human or computer without the proper mechanism to decrypt it.
The operation of a cipher usually depends on a piece of auxiliary information, called a key (or, in traditional NSA parlance, a cryptovariable). The encrypting procedure is varied depending on the key, which changes the detailed operation of the algorithm. A key must be selected before using a cipher to encrypt a message. Without knowledge of the key, it should be extremely difficult, if not impossible, to decrypt the resulting ciphertext into readable plaintext.
Most modern ciphers can be categorized in several ways:
By whether they work on blocks of symbols usually of a fixed size (block ciphers), or on a continuous stream of symbols (stream ciphers).
By whether the same key is used for both encryption and decryption (symmetric key algorithms), or if a different key is used for each (asymmetric key algorithms). If the algorithm is symmetric, the key must be known to the recipient and sender and to no one else. If the algorithm is an asymmetric one, the enciphering key is different from, but closely related to, the deciphering key. If one key cannot be deduced from the other, the asymmetric key algorithm has the public/private key property and one of the keys may be made public without loss of confidentiality.
Etymology
The Roman number system was very cumbersome, in part because there was no concept of zero. The Arabic numeral system spread from the Arab world to Europe in the Middle Ages. In this transition, the Arabic word for zero, صفر (sifr), was adopted into Medieval Latin as cifra and then passed into Middle French; this eventually led to the English word cipher (minority spelling cypher). One theory for how the term came to refer to encoding is that the concept of zero was confusing to Europeans, and so the term came to refer to a message or communication that was not easily understood.
The term cipher was later also used to refer to any Arabic digit, or to calculation using them, so encoding text in the form of Arabic numerals is literally converting the text to "ciphers".
Versus codes
In non-technical usage, a "(secret) code" typically means a "cipher". Within technical discussions, however, the words "code" and "cipher" refer to two different concepts. Codes work at the level of meaning—that is, words or phrases are converted into something else and this chunking generally shortens the message.
An example of this is the commercial telegraph code which was used to shorten long telegraph messages which resulted from entering into commercial contracts using exchanges of telegrams.
Another example is given by whole-word ciphers, which allow the user to replace an entire word with a symbol or character, much like the way Japanese writing uses kanji (Chinese characters) to supplement the native script. For example, "The quick brown fox jumps over the lazy dog" becomes "The quick brown 狐 jumps 上 the lazy 犬".
Ciphers, on the other hand, work at a lower level: the level of individual letters, small groups of letters, or, in modern schemes, individual bits and blocks of bits. Some systems used both codes and ciphers in one system, using superencipherment to increase the security. In some cases the terms codes and ciphers are also used synonymously with substitution and transposition.
Historically, cryptography was split into a dichotomy of codes and ciphers; and coding had its own terminology, analogous to that for ciphers: "encoding, codetext, decoding" and so on.
However, codes have a variety of drawbacks, including susceptibility to cryptanalysis and the difficulty of managing a cumbersome codebook. Because of this, codes have fallen into disuse in modern cryptography, and ciphers are the dominant technique.
Types
There are a variety of different types of encryption. Algorithms used earlier in the history of cryptography are substantially different from modern methods, and modern ciphers can be classified according to how they operate and whether they use one or two keys.
Historical
Historical pen and paper ciphers used in the past are sometimes known as classical ciphers. They include simple substitution ciphers (such as ROT13) and transposition ciphers (such as a Rail Fence Cipher). For example, "GOOD DOG" can be encrypted as "PLLX XLP" where "L" substitutes for "O", "P" for "G", and "X" for "D" in the message. Transposition of the letters "GOOD DOG" can result in "DGOGDOO". These simple ciphers and examples are easy to crack, even without plaintext-ciphertext pairs.
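The toy substitution from this paragraph can be written out directly; the mapping below is just the example key from the text, not a recommended cipher.

```python
def substitute(message, mapping):
    # Monoalphabetic substitution: replace each mapped letter, pass others through.
    return "".join(mapping.get(ch, ch) for ch in message)

toy_key = {"G": "P", "O": "L", "D": "X"}
print(substitute("GOOD DOG", toy_key))   # -> PLLX XLP
```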
Simple ciphers were replaced by polyalphabetic substitution ciphers (such as the Vigenère) which changed the substitution alphabet for every letter. For example, "GOOD DOG" can be encrypted as "PLSX TWF" where "L", "S", and "W" substitute for "O". With even a small amount of known or estimated plaintext, simple polyalphabetic substitution ciphers and letter transposition ciphers designed for pen and paper encryption are easy to crack. It is possible to create a secure pen and paper cipher based on a one-time pad, but the usual disadvantages of one-time pads apply.
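For comparison, a minimal Vigenère-style encryption routine is sketched below. The "PLSX TWF" example in the text depends on a specific keyword that is not given, so this code illustrates the general scheme rather than reproducing that exact output.

```python
def vigenere_encrypt(plaintext, keyword):
    # Shift each letter by the corresponding keyword letter, cycling the keyword;
    # non-letters pass through unchanged and do not advance the key position.
    out, k = [], 0
    for ch in plaintext.upper():
        if ch.isalpha():
            shift = ord(keyword[k % len(keyword)].upper()) - ord("A")
            out.append(chr((ord(ch) - ord("A") + shift) % 26 + ord("A")))
            k += 1
        else:
            out.append(ch)
    return "".join(out)
```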
During the early twentieth century, electro-mechanical machines were invented to do encryption and decryption using transposition, polyalphabetic substitution, and a kind of "additive" substitution. In rotor machines, several rotor disks provided polyalphabetic substitution, while plug boards provided another substitution. Keys were easily changed by changing the rotor disks and the plugboard wires. Although these encryption methods were more complex than previous schemes and required machines to encrypt and decrypt, other machines such as the British Bombe were invented to crack these encryption methods.
Modern
Modern encryption methods can be divided by two criteria: by type of key used, and by type of input data.
By type of key used ciphers are divided into:
symmetric key algorithms (Private-key cryptography), where the same key is used for encryption and decryption, and
asymmetric key algorithms (Public-key cryptography), where two different keys are used for encryption and decryption.
In a symmetric key algorithm (e.g., DES and AES), the sender and receiver must have a shared key set up in advance and kept secret from all other parties; the sender uses this key for encryption, and the receiver uses the same key for decryption. The Feistel cipher uses a combination of substitution and transposition techniques. Most block cipher algorithms are based on this structure. In an asymmetric key algorithm (e.g., RSA), there are two separate keys: a public key is published and enables any sender to perform encryption, while a private key is kept secret by the receiver and enables only that person to perform correct decryption.
Ciphers can be distinguished into two types by the type of input data:
block ciphers, which encrypt blocks of data of fixed size, and
stream ciphers, which encrypt continuous streams of data.
Key size and vulnerability
In a pure mathematical attack (i.e., lacking any other information to help break a cipher), two factors above all count:
Computational power available, i.e., the computing power which can be brought to bear on the problem. It is important to note that average performance/capacity of a single computer is not the only factor to consider. An adversary can use multiple computers at once, for instance, to increase the speed of exhaustive search for a key (i.e., "brute force" attack) substantially.
Key size, i.e., the size of key used to encrypt a message. As the key size increases, so does the complexity of exhaustive search to the point where it becomes impractical to crack encryption directly.
Since the desired effect is computational difficulty, in theory one would choose an algorithm and desired difficulty level, then decide the key length accordingly.
An example of this process can be found at Key Length, which uses multiple reports to suggest that a symmetric cipher with 128-bit keys, an asymmetric cipher with 3072-bit keys, and an elliptic curve cipher with 256-bit keys all have similar difficulty to attack at present.
Claude Shannon proved, using information theory considerations, that any theoretically unbreakable cipher must have keys which are at least as long as the plaintext, and used only once: one-time pad.
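Shannon's condition is easy to demonstrate mechanically: a one-time pad is just an XOR with a truly random key at least as long as the message, used once. A minimal sketch follows (illustrative only; generating, distributing and never reusing the pad is the hard part in practice).

```python
import secrets

def otp_apply(data: bytes, pad: bytes) -> bytes:
    # Encryption and decryption are the same operation: bytewise XOR with the pad.
    assert len(pad) >= len(data)
    return bytes(d ^ p for d, p in zip(data, pad))

message = b"GOOD DOG"
pad = secrets.token_bytes(len(message))   # truly random, used only once
ciphertext = otp_apply(message, pad)
assert otp_apply(ciphertext, pad) == message
```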
See also
Autokey cipher
Cover-coding
Encryption software
List of ciphertexts
Steganography
Telegraph code
Notes
References
External links
Kish cypher
Cryptography |
5300 | https://en.wikipedia.org/wiki/Computer%20data%20storage | Computer data storage | Computer data storage is a technology consisting of computer components and recording media that are used to retain digital data. It is a core function and fundamental component of computers.
The central processing unit (CPU) of a computer is what manipulates data by performing computations. In practice, almost all computers use a storage hierarchy, which puts fast but expensive and small storage options close to the CPU and slower but less expensive and larger options further away. Generally the fast volatile technologies (which lose data when off power) are referred to as "memory", while slower persistent technologies are referred to as "storage".
Even the first computer designs, Charles Babbage's Analytical Engine and Percy Ludgate's Analytical Machine, clearly distinguished between processing and memory (Babbage stored numbers as rotations of gears, while Ludgate stored numbers as displacements of rods in shuttles). This distinction was extended in the Von Neumann architecture, where the CPU consists of two main parts: The control unit and the arithmetic logic unit (ALU). The former controls the flow of data between the CPU and memory, while the latter performs arithmetic and logical operations on data.
Functionality
Without a significant amount of memory, a computer would merely be able to perform fixed operations and immediately output the result. It would have to be reconfigured to change its behavior. This is acceptable for devices such as desk calculators, digital signal processors, and other specialized devices. Von Neumann machines differ in having a memory in which they store their operating instructions and data. Such computers are more versatile in that they do not need to have their hardware reconfigured for each new program, but can simply be reprogrammed with new in-memory instructions; they also tend to be simpler to design, in that a relatively simple processor may keep state between successive computations to build up complex procedural results. Most modern computers are von Neumann machines.
Data organization and representation
A modern digital computer represents data using the binary numeral system. Text, numbers, pictures, audio, and nearly any other form of information can be converted into a string of bits, or binary digits, each of which has a value of 0 or 1. The most common unit of storage is the byte, equal to 8 bits. A piece of information can be handled by any computer or device whose storage space is large enough to accommodate the binary representation of the piece of information, or simply data. For example, the complete works of Shakespeare, about 1250 pages in print, can be stored in about five megabytes (40 million bits) with one byte per character.
Data are encoded by assigning a bit pattern to each character, digit, or multimedia object. Many standards exist for encoding (e.g. character encodings like ASCII, image encodings like JPEG, video encodings like MPEG-4).
By adding bits to each encoded unit, redundancy allows the computer both to detect errors in coded data and to correct them based on mathematical algorithms. Errors generally occur with low probability due to random bit-value flipping, to "physical bit fatigue" (the loss of a physical bit's ability to maintain a distinguishable value of 0 or 1), or to errors in inter- or intra-computer communication. A random bit flip (e.g. due to random radiation) is typically corrected upon detection. A group of malfunctioning physical bits (the specific defective bit is not always known; the definition of the group depends on the specific storage device) is typically fenced out automatically, taken out of use by the device, and replaced with another functioning equivalent group in the device, onto which the corrected bit values are restored (if possible). The cyclic redundancy check (CRC) method is typically used in communications and storage for error detection. A detected error is then retried.
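As a concrete example of error detection (using Python's standard zlib module; the data here is arbitrary), a CRC-32 checksum is stored or transmitted alongside the data and recomputed on read-back:

```python
import zlib

data = b"example sector contents"
stored_crc = zlib.crc32(data)

# Later, after reading the data back (or receiving it), verify integrity:
if zlib.crc32(data) != stored_crc:
    raise IOError("CRC mismatch: data is corrupted, retry the read or transfer")
```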
In many cases (such as in a database), data compression methods allow a string of bits to be represented by a shorter bit string ("compress") and the original string to be reconstructed when needed ("decompress"). This utilizes substantially less storage (by tens of percent) for many types of data at the cost of more computation (compressing and decompressing when needed). The trade-off between the storage cost saved and the cost of the related computations and possible delays in data availability is analyzed before deciding whether to keep certain data compressed or not.
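A small, self-contained example of that compress/decompress round trip, again using Python's standard zlib module on deliberately repetitive data:

```python
import zlib

original = b"ABABABAB" * 1000                 # highly repetitive, so it compresses well
compressed = zlib.compress(original)
restored = zlib.decompress(compressed)

assert restored == original                   # lossless round trip
print(len(original), "->", len(compressed))   # storage saved at the cost of CPU time
```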
For security reasons, certain types of data (e.g. credit-card information) may be kept encrypted in storage to prevent the possibility of unauthorized information reconstruction from chunks of storage snapshots.
Hierarchy of storage
Generally, the lower a storage is in the hierarchy, the lesser its bandwidth and the greater its access latency is from the CPU. This traditional division of storage to primary, secondary, tertiary, and off-line storage is also guided by cost per bit.
In contemporary usage, memory is usually semiconductor storage read-write random-access memory, typically DRAM (dynamic RAM) or other forms of fast but temporary storage. Storage consists of storage devices and their media not directly accessible by the CPU (secondary or tertiary storage), typically hard disk drives, optical disc drives, and other devices slower than RAM but non-volatile (retaining contents when powered down).
Historically, memory has been called core memory, main memory, real storage, or internal memory. Meanwhile, non-volatile storage devices have been referred to as secondary storage, external memory, or auxiliary/peripheral storage.
Primary storage
Primary storage (also known as main memory, internal memory, or prime memory), often referred to simply as memory, is the only one directly accessible to the CPU. The CPU continuously reads instructions stored there and executes them as required. Any data actively operated on is also stored there in a uniform manner.
Historically, early computers used delay lines, Williams tubes, or rotating magnetic drums as primary storage. By 1954, those unreliable methods were mostly replaced by magnetic-core memory. Core memory remained dominant until the 1970s, when advances in integrated circuit technology allowed semiconductor memory to become economically competitive.
This led to modern random-access memory (RAM). It is small-sized and light but quite expensive at the same time. The particular types of RAM used for primary storage are volatile, meaning that they lose the information when not powered. Besides storing opened programs, it serves as disk cache and write buffer to improve both reading and writing performance. Operating systems borrow RAM capacity for caching so long as it is not needed by running software. Spare memory can be utilized as a RAM drive for temporary high-speed data storage.
As shown in the diagram, traditionally there are two more sub-layers of the primary storage, besides main large-capacity RAM:
Processor registers are located inside the processor. Each register typically holds a word of data (often 32 or 64 bits). CPU instructions instruct the arithmetic logic unit to perform various calculations or other operations on this data (or with the help of it). Registers are the fastest of all forms of computer data storage.
Processor cache is an intermediate stage between ultra-fast registers and much slower main memory. It was introduced solely to improve the performance of computers. Most actively used information in the main memory is just duplicated in the cache memory, which is faster, but of much lesser capacity. On the other hand, main memory is much slower, but has a much greater storage capacity than processor registers. Multi-level hierarchical cache setup is also commonly used—primary cache being smallest, fastest and located inside the processor; secondary cache being somewhat larger and slower.
Main memory is directly or indirectly connected to the central processing unit via a memory bus. It is actually two buses (not on the diagram): an address bus and a data bus. The CPU firstly sends a number through an address bus, a number called memory address, that indicates the desired location of data. Then it reads or writes the data in the memory cells using the data bus. Additionally, a memory management unit (MMU) is a small device between CPU and RAM recalculating the actual memory address, for example to provide an abstraction of virtual memory or other tasks.
As the RAM types used for primary storage are volatile (uninitialized at start up), a computer containing only such storage would not have a source to read instructions from, in order to start the computer. Hence, non-volatile primary storage containing a small startup program (BIOS) is used to bootstrap the computer, that is, to read a larger program from non-volatile secondary storage to RAM and start to execute it. A non-volatile technology used for this purpose is called ROM, for read-only memory (the terminology may be somewhat confusing as most ROM types are also capable of random access).
Many types of "ROM" are not literally read only, as updates to them are possible; however it is slow and memory must be erased in large portions before it can be re-written. Some embedded systems run programs directly from ROM (or similar), because such programs are rarely changed. Standard computers do not store non-rudimentary programs in ROM, and rather, use large capacities of secondary storage, which is non-volatile as well, and not as costly.
Recently, primary storage and secondary storage in some uses refer to what was historically called, respectively, secondary storage and tertiary storage.
Secondary storage
Secondary storage (also known as external memory or auxiliary storage) differs from primary storage in that it is not directly accessible by the CPU. The computer usually uses its input/output channels to access secondary storage and transfer the desired data to primary storage. Secondary storage is non-volatile (retaining data when its power is shut off). Modern computer systems typically have two orders of magnitude more secondary storage than primary storage because secondary storage is less expensive.
In modern computers, hard disk drives (HDDs) or solid-state drives (SSDs) are usually used as secondary storage. The access time per byte for HDDs or SSDs is typically measured in milliseconds (one thousandth seconds), while the access time per byte for primary storage is measured in nanoseconds (one billionth seconds). Thus, secondary storage is significantly slower than primary storage. Rotating optical storage devices, such as CD and DVD drives, have even longer access times. Other examples of secondary storage technologies include USB flash drives, floppy disks, magnetic tape, paper tape, punched cards, and RAM disks.
Once the disk read/write head on HDDs reaches the proper placement and the data, subsequent data on the track are very fast to access. To reduce the seek time and rotational latency, data are transferred to and from disks in large contiguous blocks. Sequential or block access on disks is orders of magnitude faster than random access, and many sophisticated paradigms have been developed to design efficient algorithms based upon sequential and block access. Another way to reduce the I/O bottleneck is to use multiple disks in parallel in order to increase the bandwidth between primary and secondary memory.
Secondary storage is often formatted according to a file system format, which provides the abstraction necessary to organize data into files and directories, while also providing metadata describing the owner of a certain file, the access time, the access permissions, and other information.
Most computer operating systems use the concept of virtual memory, allowing utilization of more primary storage capacity than is physically available in the system. As the primary memory fills up, the system moves the least-used chunks (pages) to a swap file or page file on secondary storage, retrieving them later when needed. If a lot of pages are moved to slower secondary storage, the system performance is degraded.
Tertiary storage
Tertiary storage or tertiary memory is a level below secondary storage. Typically, it involves a robotic mechanism which will mount (insert) and dismount removable mass storage media into a storage device according to the system's demands; such data are often copied to secondary storage before use. It is primarily used for archiving rarely accessed information since it is much slower than secondary storage (e.g. 5–60 seconds vs. 1–10 milliseconds). This is primarily useful for extraordinarily large data stores, accessed without human operators. Typical examples include tape libraries and optical jukeboxes.
When a computer needs to read information from the tertiary storage, it will first consult a catalog database to determine which tape or disc contains the information. Next, the computer will instruct a robotic arm to fetch the medium and place it in a drive. When the computer has finished reading the information, the robotic arm will return the medium to its place in the library.
Tertiary storage is also known as nearline storage because it is "near to online". The formal distinction between online, nearline, and offline storage is:
Online storage is immediately available for I/O.
Nearline storage is not immediately available, but can be made online quickly without human intervention.
Offline storage is not immediately available, and requires some human intervention to become online.
For example, always-on spinning hard disk drives are online storage, while spinning drives that spin down automatically, such as in massive arrays of idle disks (MAID), are nearline storage. Removable media such as tape cartridges that can be automatically loaded, as in tape libraries, are nearline storage, while tape cartridges that must be manually loaded are offline storage.
Off-line storage
Off-line storage is a computer data storage on a medium or a device that is not under the control of a processing unit. The medium is recorded, usually in a secondary or tertiary storage device, and then physically removed or disconnected. It must be inserted or connected by a human operator before a computer can access it again. Unlike tertiary storage, it cannot be accessed without human interaction.
Off-line storage is used to transfer information, since the detached medium can easily be physically transported. Additionally, it is useful for cases of disaster, where, for example, a fire destroys the original data, a medium in a remote location will be unaffected, enabling disaster recovery. Off-line storage increases general information security, since it is physically inaccessible from a computer, and data confidentiality or integrity cannot be affected by computer-based attack techniques. Also, if the information stored for archival purposes is rarely accessed, off-line storage is less expensive than tertiary storage.
In modern personal computers, most secondary and tertiary storage media are also used for off-line storage. Optical discs and flash memory devices are most popular, and to much lesser extent removable hard disk drives. In enterprise uses, magnetic tape is predominant. Older examples are floppy disks, Zip disks, or punched cards.
Characteristics of storage
Storage technologies at all levels of the storage hierarchy can be differentiated by evaluating certain core characteristics as well as measuring characteristics specific to a particular implementation. These core characteristics are volatility, mutability, accessibility, and addressability. For any particular implementation of any storage technology, the characteristics worth measuring are capacity and performance.
Volatility
Non-volatile memory retains the stored information even if not constantly supplied with electric power. It is suitable for long-term storage of information. Volatile memory requires constant power to maintain the stored information. The fastest memory technologies are volatile ones, although that is not a universal rule. Since the primary storage is required to be very fast, it predominantly uses volatile memory.
Dynamic random-access memory is a form of volatile memory that also requires the stored information to be periodically reread and rewritten, or refreshed, otherwise it would vanish. Static random-access memory is a form of volatile memory similar to DRAM with the exception that it never needs to be refreshed as long as power is applied; it loses its content when the power supply is lost.
An uninterruptible power supply (UPS) can be used to give a computer a brief window of time to move information from primary volatile storage into non-volatile storage before the batteries are exhausted. Some systems, for example EMC Symmetrix, have integrated batteries that maintain volatile storage for several minutes.
Mutability
Read/write storage, or mutable storage – Allows information to be overwritten at any time. A computer without some amount of read/write storage for primary storage purposes would be useless for many tasks. Modern computers typically use read/write storage also for secondary storage.
Slow write, fast read storage – Read/write storage which allows information to be overwritten multiple times, but with the write operation being much slower than the read operation. Examples include CD-RW and SSD.
Write once storage – Write once read many (WORM) allows the information to be written only once at some point after manufacture. Examples include semiconductor programmable read-only memory and CD-R.
Read only storage – Retains the information stored at the time of manufacture. Examples include mask ROM ICs and CD-ROM.
Accessibility
Random access – Any location in storage can be accessed at any moment in approximately the same amount of time. Such a characteristic is well suited for primary and secondary storage. Most semiconductor memories and disk drives provide random access, though only flash memory supports random access without latency, as no mechanical parts need to be moved.
Sequential access – Pieces of information are accessed in a serial order, one after the other; therefore the time to access a particular piece of information depends upon which piece of information was last accessed. Such a characteristic is typical of off-line storage.
Addressability
Location-addressable – Each individually accessible unit of information in storage is selected with its numerical memory address. In modern computers, location-addressable storage is usually limited to primary storage, accessed internally by computer programs, since location-addressability is very efficient but burdensome for humans.
File addressable – Information is divided into files of variable length, and a particular file is selected with human-readable directory and file names. The underlying device is still location-addressable, but the operating system of a computer provides the file system abstraction to make the operation more understandable. In modern computers, secondary, tertiary and off-line storage use file systems.
Content-addressable – Each individually accessible unit of information is selected on the basis of (part of) the contents stored there. Content-addressable storage can be implemented using software (a computer program) or hardware (a computer device), with hardware being the faster but more expensive option. Hardware content-addressable memory is often used in a computer's CPU cache.
Capacity
Raw capacity – The total amount of stored information that a storage device or medium can hold. It is expressed as a quantity of bits or bytes (e.g. 10.4 megabytes).
Memory storage density – The compactness of stored information. It is the storage capacity of a medium divided by a unit of length, area or volume (e.g. 1.2 megabytes per square inch).
Performance
Latency – The time it takes to access a particular location in storage. The relevant unit of measurement is typically nanosecond for primary storage, millisecond for secondary storage, and second for tertiary storage. It may make sense to separate read latency and write latency (especially for non-volatile memory) and in case of sequential access storage, minimum, maximum and average latency.
Throughput – The rate at which information can be read from or written to the storage. In computer data storage, throughput is usually expressed in terms of megabytes per second (MB/s), though bit rate may also be used. As with latency, read rate and write rate may need to be differentiated. Also accessing media sequentially, as opposed to randomly, typically yields maximum throughput.
Granularity – The size of the largest "chunk" of data that can be efficiently accessed as a single unit, e.g. without introducing additional latency.
Reliability – The probability of spontaneous bit value change under various conditions, or overall failure rate.
Utilities such as hdparm and sar can be used to measure IO performance in Linux.
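A rough back-of-the-envelope model ties latency and throughput together when estimating access times; the figures below are illustrative assumptions, not measurements of any particular device.

```python
def transfer_time_seconds(size_bytes, throughput_mb_per_s, latency_s=0.0):
    # Simple model: total time ~= access latency + size / sustained throughput.
    return latency_s + size_bytes / (throughput_mb_per_s * 1_000_000)

# e.g. reading 1 GB sequentially from a ~100 MB/s disk with ~10 ms of seek latency
print(transfer_time_seconds(1_000_000_000, 100, latency_s=0.010))  # about 10 seconds
```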
Energy use
Storage devices that reduce fan usage or automatically shut down during inactivity, and low-power hard drives, can reduce energy consumption by 90 percent.
2.5-inch hard disk drives often consume less power than larger ones. Low capacity solid-state drives have no moving parts and consume less power than hard disks. Also, memory may use more power than hard disks. Large caches, which are used to avoid hitting the memory wall, may also consume a large amount of power.
Security
Full disk encryption, volume and virtual disk encryption, and/or file/folder encryption is readily available for most storage devices.
Hardware memory encryption is available in Intel Architecture, supporting Total Memory Encryption (TME) and page-granular memory encryption with multiple keys (MKTME), and in the SPARC M7 generation since October 2015.
Vulnerability and reliability
Distinct types of data storage have different points of failure and various methods of predictive failure analysis.
Vulnerabilities that can instantly lead to total loss are head crashing on mechanical hard drives and failure of electronic components on flash storage.
Error detection
Impending failure on hard disk drives is estimable using S.M.A.R.T. diagnostic data that includes the hours of operation and the count of spin-ups, though its reliability is disputed.
Flash storage may experience downspiking transfer rates as a result of accumulating errors, which the flash memory controller attempts to correct.
The health of optical media can be determined by measuring correctable minor errors, of which high counts signify deteriorating and/or low-quality media. Too many consecutive minor errors can lead to data corruption. Not all vendors and models of optical drives support error scanning.
Storage media
The most commonly used data storage media are semiconductor, magnetic, and optical, while paper still sees some limited usage. Some other fundamental storage technologies, such as all-flash arrays (AFAs), are proposed for development.
Semiconductor
Semiconductor memory uses semiconductor-based integrated circuit (IC) chips to store information. Data are typically stored in metal–oxide–semiconductor (MOS) memory cells. A semiconductor memory chip may contain millions of memory cells, consisting of tiny MOS field-effect transistors (MOSFETs) and/or MOS capacitors. Both volatile and non-volatile forms of semiconductor memory exist, the former using standard MOSFETs and the latter using floating-gate MOSFETs.
In modern computers, primary storage almost exclusively consists of dynamic volatile semiconductor random-access memory (RAM), particularly dynamic random-access memory (DRAM). Since the turn of the century, a type of non-volatile floating-gate semiconductor memory known as flash memory has steadily gained share as off-line storage for home computers. Non-volatile semiconductor memory is also used for secondary storage in various advanced electronic devices and specialized computers that are designed for them.
As early as 2006, notebook and desktop computer manufacturers started using flash-based solid-state drives (SSDs) as default configuration options for the secondary storage either in addition to or instead of the more traditional HDD.
Magnetic
Magnetic storage uses different patterns of magnetization on a magnetically coated surface to store information. Magnetic storage is non-volatile. The information is accessed using one or more read/write heads which may contain one or more recording transducers. A read/write head only covers a part of the surface so that the head or medium or both must be moved relative to another in order to access data. In modern computers, magnetic storage will take these forms:
Magnetic disk;
Floppy disk, used for off-line storage;
Hard disk drive, used for secondary storage.
Magnetic tape, used for tertiary and off-line storage;
Carousel memory (magnetic rolls).
In early computers, magnetic storage was also used as:
Primary storage in a form of magnetic memory, or core memory, core rope memory, thin-film memory and/or twistor memory;
Tertiary (e.g. NCR CRAM) or off line storage in the form of magnetic cards;
Magnetic tape was then often used for secondary storage.
Magnetic storage does not have a definite limit of rewriting cycles like flash storage and re-writeable optical media, as altering magnetic fields causes no physical wear. Rather, their life span is limited by mechanical parts.
Optical
Optical storage, the typical optical disc, stores information in deformities on the surface of a circular disc and reads this information by illuminating the surface with a laser diode and observing the reflection. Optical disc storage is non-volatile. The deformities may be permanent (read only media), formed once (write once media) or reversible (recordable or read/write media). The following forms are currently in common use:
CD, CD-ROM, DVD, BD-ROM: Read only storage, used for mass distribution of digital information (music, video, computer programs);
CD-R, DVD-R, DVD+R, BD-R: Write once storage, used for tertiary and off-line storage;
CD-RW, DVD-RW, DVD+RW, DVD-RAM, BD-RE: Slow write, fast read storage, used for tertiary and off-line storage;
Ultra Density Optical or UDO is similar in capacity to BD-R or BD-RE and is slow write, fast read storage used for tertiary and off-line storage.
Magneto-optical disc storage is optical disc storage where the magnetic state on a ferromagnetic surface stores information. The information is read optically and written by combining magnetic and optical methods. Magneto-optical disc storage is non-volatile, sequential access, slow write, fast read storage used for tertiary and off-line storage.
3D optical data storage has also been proposed.
Light induced magnetization melting in magnetic photoconductors has also been proposed for high-speed low-energy consumption magneto-optical storage.
Paper
Paper data storage, typically in the form of paper tape or punched cards, has long been used to store information for automatic processing, particularly before general-purpose computers existed. Information was recorded by punching holes into the paper or cardboard medium and was read mechanically (or later optically) to determine whether a particular location on the medium was solid or contained a hole. Barcodes make it possible for objects that are sold or transported to have some computer-readable information securely attached.
Relatively small amounts of digital data (compared to other digital data storage) may be backed up on paper as a matrix barcode for very long-term storage, as the longevity of paper typically exceeds even magnetic data storage.
Other storage media or substrates
Vacuum-tube memory – A Williams tube used a cathode-ray tube, and a Selectron tube used a large vacuum tube to store information. These primary storage devices were short-lived in the market, since the Williams tube was unreliable, and the Selectron tube was expensive.
Electro-acoustic memory – Delay-line memory used sound waves in a substance such as mercury to store information. Delay-line memory was dynamic volatile, cycle sequential read/write storage, and was used for primary storage.
Optical tape is a medium for optical storage, generally consisting of a long and narrow strip of plastic, onto which patterns can be written and from which the patterns can be read back. It shares some technologies with cinema film stock and optical discs, but is compatible with neither. The motivation behind developing this technology was the possibility of far greater storage capacities than either magnetic tape or optical discs.
Phase-change memory uses different mechanical phases of phase-change material to store information in an X–Y addressable matrix and reads the information by observing the varying electrical resistance of the material. Phase-change memory would be non-volatile, random-access read/write storage, and might be used for primary, secondary and off-line storage. Most rewritable and many write-once optical disks already use phase-change material to store information.
Holographic data storage stores information optically inside crystals or photopolymers. Holographic storage can utilize the whole volume of the storage medium, unlike optical disc storage, which is limited to a small number of surface layers. Holographic storage would be non-volatile, sequential-access, and either write-once or read/write storage. It might be used for secondary and off-line storage. See Holographic Versatile Disc (HVD).
Molecular memory stores information in polymer that can store electric charge. Molecular memory might be especially suited for primary storage. The theoretical storage capacity of molecular memory is 10 terabits per square inch (16 Gbit/mm2).
Magnetic photoconductors store magnetic information, which can be modified by low-light illumination.
DNA stores information in DNA nucleotides. It was first done in 2012, when researchers achieved a ratio of 1.28 petabytes per gram of DNA. In March 2017 scientists reported that a new algorithm called a DNA fountain achieved 85% of the theoretical limit, at 215 petabytes per gram of DNA.
Related technologies
Redundancy
While the malfunction of a group of bits may be resolved by error detection and correction mechanisms (see above), a storage device malfunction requires different solutions. The following solutions are commonly used and valid for most storage devices:
Device mirroring (replication) – A common solution to the problem is constantly maintaining an identical copy of device content on another device (typically of a same type). The downside is that this doubles the storage, and both devices (copies) need to be updated simultaneously with some overhead and possibly some delays. The upside is possible concurrent read of a same data group by two independent processes, which increases performance. When one of the replicated devices is detected to be defective, the other copy is still operational, and is being utilized to generate a new copy on another device (usually available operational in a pool of stand-by devices for this purpose).
Redundant array of independent disks (RAID) – This method generalizes the device mirroring above by allowing one device in a group of n devices to fail and be replaced, with its content restored (device mirroring is RAID with n = 2). RAID groups of n = 5 or n = 6 are common. n > 2 saves storage, compared with n = 2, at the cost of more processing during both regular operation (often with reduced performance) and defective-device replacement (see the parity sketch after this list).
Device mirroring and typical RAID are designed to handle a single device failure in the RAID group of devices. However, if a second failure occurs before the RAID group is completely repaired from the first failure, then data can be lost. The probability of a single failure is typically small. Thus the probability of two failures in a same RAID group in time proximity is much smaller (approximately the probability squared, i.e., multiplied by itself). If a database cannot tolerate even such smaller probability of data loss, then the RAID group itself is replicated (mirrored). In many cases such mirroring is done geographically remotely, in a different storage array, to handle also recovery from disasters (see disaster recovery above).
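As referenced in the RAID item above, the redundancy in a RAID 5 stripe is just bytewise XOR parity, which is what makes single-device reconstruction possible. The sketch below is a simplified illustration that ignores striping layout and real controller behaviour.

```python
def xor_parity(blocks):
    # Parity block for a stripe: bytewise XOR of all data blocks (equal length).
    parity = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            parity[i] ^= b
    return bytes(parity)

# If one data block is lost, XOR-ing the parity with the surviving blocks rebuilds it:
data = [b"AAAA", b"BBBB", b"CCCC"]
parity = xor_parity(data)
rebuilt = xor_parity([parity, data[1], data[2]])   # recovers data[0]
assert rebuilt == data[0]
```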
Network connectivity
Secondary or tertiary storage may connect to a computer using computer networks. This concept does not pertain to primary storage, which is shared between multiple processors to a much lesser degree.
Direct-attached storage (DAS) is traditional mass storage that does not use any network. It is still the most popular approach. This retronym was coined recently, together with NAS and SAN.
Network-attached storage (NAS) is mass storage attached to a computer which another computer can access at file level over a local area network, a private wide area network, or in the case of online file storage, over the Internet. NAS is commonly associated with the NFS and CIFS/SMB protocols.
Storage area network (SAN) is a specialized network that provides other computers with storage capacity. The crucial difference between NAS and SAN is that NAS presents and manages file systems to client computers, while SAN provides access at block-addressing (raw) level, leaving it to the attaching systems to manage data or file systems within the provided capacity. SAN is commonly associated with Fibre Channel networks.
Robotic storage
Large quantities of individual magnetic tapes, and optical or magneto-optical discs may be stored in robotic tertiary storage devices. In tape storage field they are known as tape libraries, and in optical storage field optical jukeboxes, or optical disk libraries per analogy. The smallest forms of either technology containing just one drive device are referred to as autoloaders or autochangers.
Robotic-access storage devices may have a number of slots, each holding individual media, and usually one or more picking robots that traverse the slots and load media to built-in drives. The arrangement of the slots and picking devices affects performance. Important characteristics of such storage are possible expansion options: adding slots, modules, drives, robots. Tape libraries may have from 10 to more than 100,000 slots, and provide terabytes or petabytes of near-line information. Optical jukeboxes are somewhat smaller solutions, up to 1,000 slots.
Robotic storage is used for backups, and for high-capacity archives in the imaging, medical, and video industries. Hierarchical storage management is the best-known archiving strategy, automatically migrating long-unused files from fast hard disk storage to libraries or jukeboxes. If the files are needed, they are retrieved back to disk.
See also
Primary storage topics
Aperture (computer memory)
Dynamic random-access memory (DRAM)
Memory latency
Mass storage
Memory cell (disambiguation)
Memory management
Memory leak
Virtual memory
Memory protection
Page address register
Stable storage
Static random-access memory (SRAM)
Secondary, tertiary and off-line storage topics
Cloud storage
Data deduplication
Data proliferation
Data storage tag used for capturing research data
Disk utility
File system
List of file formats
Flash memory
Geoplexing
Information repository
Noise-predictive maximum-likelihood detection
Object(-based) storage
Removable media
Solid-state drive
Spindle
Virtual tape library
Wait state
Write buffer
Write protection
Data storage conferences
Storage Networking World
Storage World Conference
References
Further reading
Memory & storage, Computer history museum
Computer architecture |
5323 | https://en.wikipedia.org/wiki/Computer%20science | Computer science | Fundamental areas of computer science include the study of computer programming languages, the design and analysis of algorithms, building intelligent systems, and electrical hardware.
Computer science is the study of computation, automation, and information. Computer science spans theoretical disciplines (such as algorithms, theory of computation, and information theory) to practical disciplines (including the design and implementation of hardware and software). Computer science is generally considered an area of academic research and distinct from computer programming.
Algorithms and data structures are central to computer science.
The theory of computation concerns abstract models of computation and general classes of problems that can be solved using them. The fields of cryptography and computer security involve studying the means for secure communication and for preventing security vulnerabilities. Computer graphics and computational geometry address the generation of images. Programming language theory considers approaches to the description of computational processes, and database theory concerns the management of repositories of data. Human–computer interaction investigates the interfaces through which humans and computers interact, and software engineering focuses on the design and principles behind developing software. Areas such as operating systems, networks and embedded systems investigate the principles and design behind complex systems. Computer architecture describes the construction of computer components and computer-operated equipment. Artificial intelligence and machine learning aim to synthesize goal-orientated processes such as problem-solving, decision-making, environmental adaptation, planning and learning found in humans and animals. Within artificial intelligence, computer vision aims to understand and process image and video data, while natural-language processing aims to understand and process textual and linguistic data.
The fundamental concern of computer science is determining what can and cannot be automated. The Turing Award is generally recognized as the highest distinction in computer science.
History
The earliest foundations of what would become computer science predate the invention of the modern digital computer. Machines for calculating fixed numerical tasks such as the abacus have existed since antiquity, aiding in computations such as multiplication and division. Algorithms for performing computations have existed since antiquity, even before the development of sophisticated computing equipment.
Wilhelm Schickard designed and constructed the first working mechanical calculator in 1623. In 1673, Gottfried Leibniz demonstrated a digital mechanical calculator, called the Stepped Reckoner. Leibniz may be considered the first computer scientist and information theorist, for, among other reasons, documenting the binary number system. In 1820, Thomas de Colmar launched the mechanical calculator industry when he invented his simplified arithmometer, the first calculating machine strong enough and reliable enough to be used daily in an office environment. Charles Babbage started the design of the first automatic mechanical calculator, his Difference Engine, in 1822, which eventually gave him the idea of the first programmable mechanical calculator, his Analytical Engine. He started developing this machine in 1834, and "in less than two years, he had sketched out many of the salient features of the modern computer". "A crucial step was the adoption of a punched card system derived from the Jacquard loom" making it infinitely programmable. In 1843, during the translation of a French article on the Analytical Engine, Ada Lovelace wrote, in one of the many notes she included, an algorithm to compute the Bernoulli numbers, which is considered to be the first published algorithm ever specifically tailored for implementation on a computer. Around 1885, Herman Hollerith invented the tabulator, which used punched cards to process statistical information; eventually his company became part of IBM. Following Babbage, although unaware of his earlier work, Percy Ludgate in 1909 published the 2nd of the only two designs for mechanical analytical engines in history. In 1937, one hundred years after Babbage's impossible dream, Howard Aiken convinced IBM, which was making all kinds of punched card equipment and was also in the calculator business to develop his giant programmable calculator, the ASCC/Harvard Mark I, based on Babbage's Analytical Engine, which itself used cards and a central computing unit. When the machine was finished, some hailed it as "Babbage's dream come true".
During the 1940s, with the development of new and more powerful computing machines such as the Atanasoff–Berry computer and ENIAC, the term computer came to refer to the machines rather than their human predecessors. As it became clear that computers could be used for more than just mathematical calculations, the field of computer science broadened to study computation in general. In 1945, IBM founded the Watson Scientific Computing Laboratory at Columbia University in New York City. The renovated fraternity house on Manhattan's West Side was IBM's first laboratory devoted to pure science. The lab is the forerunner of IBM's Research Division, which today operates research facilities around the world. Ultimately, the close relationship between IBM and the university was instrumental in the emergence of a new scientific discipline, with Columbia offering one of the first academic-credit courses in computer science in 1946. Computer science began to be established as a distinct academic discipline in the 1950s and early 1960s. The world's first computer science degree program, the Cambridge Diploma in Computer Science, began at the University of Cambridge Computer Laboratory in 1953. The first computer science department in the United States was formed at Purdue University in 1962. Since practical computers became available, many applications of computing have become distinct areas of study in their own rights.
Etymology
Although first proposed in 1956, the term "computer science" appears in a 1959 article in Communications of the ACM,
in which Louis Fein argues for the creation of a Graduate School in Computer Sciences analogous to the creation of Harvard Business School in 1921, justifying the name by arguing that, like management science, the subject is applied and interdisciplinary in nature, while having the characteristics typical of an academic discipline.
His efforts, and those of others such as numerical analyst George Forsythe, were rewarded: universities went on to create such departments, starting with Purdue in 1962. Despite its name, a significant amount of computer science does not involve the study of computers themselves. Because of this, several alternative names have been proposed. Certain departments of major universities prefer the term computing science, to emphasize precisely that difference. Danish scientist Peter Naur suggested the term datalogy, to reflect the fact that the scientific discipline revolves around data and data treatment, while not necessarily involving computers. The first scientific institution to use the term was the Department of Datalogy at the University of Copenhagen, founded in 1969, with Peter Naur being the first professor in datalogy. The term is used mainly in the Scandinavian countries. An alternative term, also proposed by Naur, is data science; this is now used for a multi-disciplinary field of data analysis, including statistics and databases.
In the early days of computing, a number of terms for the practitioners of the field of computing were suggested in the Communications of the ACM—turingineer, turologist, flow-charts-man, applied meta-mathematician, and applied epistemologist. Three months later in the same journal, comptologist was suggested, followed next year by hypologist. The term computics has also been suggested. In Europe, terms derived from contracted translations of the expression "automatic information" (e.g. "informazione automatica" in Italian) or "information and mathematics" are often used, e.g. informatique (French), Informatik (German), informatica (Italian, Dutch), informática (Spanish, Portuguese), informatika (Slavic languages and Hungarian) or pliroforiki (πληροφορική, which means informatics) in Greek. Similar words have also been adopted in the UK (as in the School of Informatics, University of Edinburgh). "In the U.S., however, informatics is linked with applied computing, or computing in the context of another domain."
A folkloric quotation, often attributed to—but almost certainly not first formulated by—Edsger Dijkstra, states that "computer science is no more about computers than astronomy is about telescopes." The design and deployment of computers and computer systems is generally considered the province of disciplines other than computer science. For example, the study of computer hardware is usually considered part of computer engineering, while the study of commercial computer systems and their deployment is often called information technology or information systems. However, there has been much cross-fertilization of ideas between the various computer-related disciplines. Computer science research also often intersects other disciplines, such as cognitive science, linguistics, mathematics, physics, biology, Earth science, statistics, philosophy, and logic.
Computer science is considered by some to have a much closer relationship with mathematics than many scientific disciplines, with some observers saying that computing is a mathematical science. Early computer science was strongly influenced by the work of mathematicians such as Kurt Gödel, Alan Turing, John von Neumann, Rózsa Péter and Alonzo Church and there continues to be a useful interchange of ideas between the two fields in areas such as mathematical logic, category theory, domain theory, and algebra.
The relationship between Computer Science and Software Engineering is a contentious issue, which is further muddied by disputes over what the term "Software Engineering" means, and how computer science is defined. David Parnas, taking a cue from the relationship between other engineering and science disciplines, has claimed that the principal focus of computer science is studying the properties of computation in general, while the principal focus of software engineering is the design of specific computations to achieve practical goals, making the two separate but complementary disciplines.
The academic, political, and funding aspects of computer science tend to depend on whether a department is formed with a mathematical emphasis or with an engineering emphasis. Computer science departments with a mathematics emphasis and with a numerical orientation consider alignment with computational science. Both types of departments tend to make efforts to bridge the field educationally if not across all research.
Philosophy
Epistemology of computer science
Despite the word "science" in its name, there is debate over whether or not computer science is a discipline of science, mathematics, or engineering. Allen Newell and Herbert A. Simon argued in 1975, It has since been argued that computer science can be classified as an empirical science since it makes use of empirical testing to evaluate the correctness of programs, but a problem remains in defining the laws and theorems of computer science (if any exist) and defining the nature of experiments in computer science. Proponents of classifying computer science as an engineering discipline argue that the reliability of computational systems is investigated in the same way as bridges in civil engineering and airplanes in aerospace engineering. They also argue that while empirical sciences observe what presently exists, computer science observes what is possible to exist and while scientists discover laws from observation, no proper laws have been found in computer science and it is instead concerned with creating phenomena.
Proponents of classifying computer science as a mathematical discipline argue that computer programs are physical realizations of mathematical entities and programs can be deductively reasoned through mathematical formal methods. Computer scientists Edsger W. Dijkstra and Tony Hoare regard instructions for computer programs as mathematical sentences and interpret formal semantics for programming languages as mathematical axiomatic systems.
Paradigms of computer science
A number of computer scientists have argued for the distinction of three separate paradigms in computer science. Peter Wegner argued that those paradigms are science, technology, and mathematics. Peter Denning's working group argued that they are theory, abstraction (modeling), and design. Amnon H. Eden described them as the "rationalist paradigm" (which treats computer science as a branch of mathematics, which is prevalent in theoretical computer science, and mainly employs deductive reasoning), the "technocratic paradigm" (which might be found in engineering approaches, most prominently in software engineering), and the "scientific paradigm" (which approaches computer-related artifacts from the empirical perspective of natural sciences, identifiable in some branches of artificial intelligence).
Computer science focuses on methods involved in design, specification, programming, verification, implementation and testing of human-made computing systems.
Fields
As a discipline, computer science spans a range of topics from theoretical studies of algorithms and the limits of computation to the practical issues of implementing computing systems in hardware and software.
CSAB, formerly called Computing Sciences Accreditation Board—which is made up of representatives of the Association for Computing Machinery (ACM), and the IEEE Computer Society (IEEE CS)—identifies four areas that it considers crucial to the discipline of computer science: theory of computation, algorithms and data structures, programming methodology and languages, and computer elements and architecture. In addition to these four areas, CSAB also identifies fields such as software engineering, artificial intelligence, computer networking and communication, database systems, parallel computation, distributed computation, human–computer interaction, computer graphics, operating systems, and numerical and symbolic computation as being important areas of computer science.
Theoretical computer science
Theoretical computer science is mathematical and abstract in spirit, but it derives its motivation from practical and everyday computation. Its aim is to understand the nature of computation and, as a consequence of this understanding, to provide more efficient methodologies.
Theory of computation
According to Peter Denning, the fundamental question underlying computer science is, "What can be automated?" Theory of computation is focused on answering fundamental questions about what can be computed and what amount of resources are required to perform those computations. In an effort to answer the first question, computability theory examines which computational problems are solvable on various theoretical models of computation. The second question is addressed by computational complexity theory, which studies the time and space costs associated with different approaches to solving a multitude of computational problems.
The famous P = NP? problem, one of the Millennium Prize Problems, is an open problem in the theory of computation.
Information and coding theory
Information theory, closely related to probability and statistics, is related to the quantification of information. This was developed by Claude Shannon to find fundamental limits on signal processing operations such as compressing data and on reliably storing and communicating data.
Coding theory is the study of the properties of codes (systems for converting information from one form to another) and their fitness for a specific application. Codes are used for data compression, cryptography, error detection and correction, and more recently also for network coding. Codes are studied for the purpose of designing efficient and reliable data transmission methods.
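A minimal Python sketch of quantifying information in the sense described above: it estimates the Shannon entropy of a short message from its empirical character frequencies. The function name and sample strings are illustrative choices rather than part of any standard.

    import math
    from collections import Counter

    def shannon_entropy(message: str) -> float:
        # Entropy in bits per symbol, estimated from character frequencies.
        counts = Counter(message)
        total = len(message)
        return -sum((n / total) * math.log2(n / total) for n in counts.values())

    # A repetitive message carries less information per symbol than one
    # whose symbols are closer to uniformly distributed.
    print(shannon_entropy("aaaaaaab"))   # about 0.54 bits per symbol
    print(shannon_entropy("abcdefgh"))   # 3.0 bits per symbol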
Data structures and algorithms
Data structures and algorithms are the studies of commonly used computational methods and their computational efficiency.
Programming language theory and formal methods
Programming language theory is a branch of computer science that deals with the design, implementation, analysis, characterization, and classification of programming languages and their individual features. It falls within the discipline of computer science, both depending on and affecting mathematics, software engineering, and linguistics. It is an active research area, with numerous dedicated academic journals.
Formal methods are a particular kind of mathematically based technique for the specification, development and verification of software and hardware systems. The use of formal methods for software and hardware design is motivated by the expectation that, as in other engineering disciplines, performing appropriate mathematical analysis can contribute to the reliability and robustness of a design. They form an important theoretical underpinning for software engineering, especially where safety or security is involved. Formal methods are a useful adjunct to software testing since they help avoid errors and can also give a framework for testing. For industrial use, tool support is required. However, the high cost of using formal methods means that they are usually only used in the development of high-integrity and life-critical systems, where safety or security is of utmost importance. Formal methods are best described as the application of a fairly broad variety of theoretical computer science fundamentals, in particular logic calculi, formal languages, automata theory, and program semantics, but also type systems and algebraic data types to problems in software and hardware specification and verification.
Computer systems and computational processes
Artificial intelligence
Artificial intelligence (AI) aims to synthesize goal-orientated processes such as problem-solving, decision-making, environmental adaptation, learning, and communication found in humans and animals. From its origins in cybernetics and in the Dartmouth Conference (1956), artificial intelligence research has been necessarily cross-disciplinary, drawing on areas of expertise such as applied mathematics, symbolic logic, semiotics, electrical engineering, philosophy of mind, neurophysiology, and social intelligence. AI is associated in the popular mind with robotic development, but the main field of practical application has been as an embedded component in areas of software development, which require computational understanding. The starting point in the late 1940s was Alan Turing's question "Can computers think?", and the question remains effectively unanswered, although the Turing test is still used to assess computer output on the scale of human intelligence. But the automation of evaluative and predictive tasks has been increasingly successful as a substitute for human monitoring and intervention in domains of computer application involving complex real-world data.
Computer architecture and organization
Computer architecture, or digital computer organization, is the conceptual design and fundamental operational structure of a computer system. It focuses largely on the way by which the central processing unit performs internally and accesses addresses in memory. Computer engineers study computational logic and design of computer hardware, from individual processor components, microcontrollers, personal computers to supercomputers and embedded systems. The term "architecture" in computer literature can be traced to the work of Lyle R. Johnson and Frederick P. Brooks, Jr., members of the Machine Organization department in IBM's main research center in 1959.
Concurrent, parallel and distributed computing
Concurrency is a property of systems in which several computations are executing simultaneously, and potentially interacting with each other. A number of mathematical models have been developed for general concurrent computation including Petri nets, process calculi and the Parallel Random Access Machine model. When multiple computers are connected in a network while using concurrency, this is known as a distributed system. Computers within that distributed system have their own private memory, and information can be exchanged to achieve common goals.
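A minimal Python sketch of several computations executing concurrently, using the standard library rather than any of the formal models mentioned above; the tasks and data are illustrative.

    from concurrent.futures import ThreadPoolExecutor

    def word_count(text: str) -> int:
        # An independent unit of work; several calls may be in flight at once.
        return len(text.split())

    documents = ["the quick brown fox", "jumps over", "the lazy dog"]

    # Execute the computations concurrently and collect the results in order.
    with ThreadPoolExecutor(max_workers=3) as pool:
        counts = list(pool.map(word_count, documents))

    print(counts)   # [4, 2, 3]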
Computer networks
This branch of computer science concerns the design and management of the networks that connect computers worldwide.
Computer security and cryptography
Computer security is a branch of computer technology with the objective of protecting information from unauthorized access, disruption, or modification while maintaining the accessibility and usability of the system for its intended users.
Historical cryptography is the art of writing and deciphering secret messages. Modern cryptography is the scientific study of problems relating to distributed computations that can be attacked. Technologies studied in modern cryptography include symmetric and asymmetric encryption, digital signatures, cryptographic hash functions, key-agreement protocols, blockchain, zero-knowledge proofs, and garbled circuits.
Databases and data mining
A database is intended to organize, store, and retrieve large amounts of data easily. Digital databases are managed using database management systems to store, create, maintain, and search data, through database models and query languages. Data mining is a process of discovering patterns in large data sets.
Computer graphics and visualization
Computer graphics is the study of digital visual contents and involves the synthesis and manipulation of image data. The study is connected to many other fields in computer science, including computer vision, image processing, and computational geometry, and is heavily applied in the fields of special effects and video games.
Image and sound processing
Information can take the form of images, sound, video or other multimedia. Bits of information can be streamed via signals. Its processing is the central notion of informatics, the European view on computing, which studies information-processing algorithms independently of the type of information carrier, whether electrical, mechanical or biological. This field plays an important role in information theory, telecommunications and information engineering, and has applications in medical image computing and speech synthesis, among others. The question "What is the lower bound on the complexity of fast Fourier transform algorithms?" is one of the unsolved problems in theoretical computer science.
Applied computer science
Computational science, finance and engineering
Scientific computing (or computational science) is the field of study concerned with constructing mathematical models and quantitative analysis techniques and using computers to analyze and solve scientific problems. A major usage of scientific computing is simulation of various processes, including computational fluid dynamics, physical, electrical, and electronic systems and circuits, as well as societies and social situations (notably war games) along with their habitats, among many others. Modern computers enable optimization of such designs as complete aircraft. Notable in electrical and electronic circuit design are SPICE, as well as software for physical realization of new (or modified) designs. The latter includes essential design software for integrated circuits.
Social computing and human–computer interaction
Social computing is an area that is concerned with the intersection of social behavior and computational systems. Human–computer interaction research develops theories, principles, and guidelines for user interface designers.
Software engineering
Software engineering is the study of designing, implementing, and modifying software in order to ensure it is of high quality, affordable, maintainable, and fast to build. It is a systematic approach to software design, involving the application of engineering practices to software. Software engineering deals with the organizing and analyzing of software; it does not just deal with the creation or manufacture of new software, but also its internal arrangement and maintenance. Examples include software testing, systems engineering, technical debt and software development processes.
Discoveries
The philosopher of computing Bill Rapaport noted three Great Insights of Computer Science:
Gottfried Wilhelm Leibniz's, George Boole's, Alan Turing's, Claude Shannon's, and Samuel Morse's insight: there are only two objects that a computer has to deal with in order to represent "anything".
All the information about any computable problem can be represented using only 0 and 1 (or any other bistable pair that can flip-flop between two easily distinguishable states, such as "on/off", "magnetized/de-magnetized", "high-voltage/low-voltage", etc.).
Alan Turing's insight: there are only five actions that a computer has to perform in order to do "anything".
Every algorithm can be expressed in a language for a computer consisting of only five basic instructions:
move left one location;
move right one location;
read symbol at current location;
print 0 at current location;
print 1 at current location.
Corrado Böhm and Giuseppe Jacopini's insight: there are only three ways of combining these actions (into more complex ones) that are needed in order for a computer to do "anything".
Only three rules are needed to combine any set of basic instructions into more complex ones:
sequence: first do this, then do that;
selection: IF such-and-such is the case, THEN do this, ELSE do that;
repetition: WHILE such-and-such is the case, DO this.
Note that the three rules of Böhm and Jacopini's insight can be further simplified with the use of goto (which means it is more elementary than structured programming).
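A minimal Python sketch of the three combining rules, applied to a simple illustrative task (summing the even numbers up to a limit); the sequence, selection and repetition constructs are marked in the comments.

    def sum_of_even_numbers(limit: int) -> int:
        total = 0                  # sequence: one statement after another
        n = 0
        while n <= limit:          # repetition: WHILE the condition holds, DO the body
            if n % 2 == 0:         # selection: IF even THEN add, ELSE skip
                total = total + n
            n = n + 1
        return total

    print(sum_of_even_numbers(10))   # 30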
Programming paradigms
Programming languages can be used to accomplish different tasks in different ways. Common programming paradigms include:
Functional programming, a style of building the structure and elements of computer programs that treats computation as the evaluation of mathematical functions and avoids state and mutable data. It is a declarative programming paradigm, which means programming is done with expressions or declarations instead of statements.
Imperative programming, a programming paradigm that uses statements that change a program's state. In much the same way that the imperative mood in natural languages expresses commands, an imperative program consists of commands for the computer to perform. Imperative programming focuses on describing how a program operates.
Object-oriented programming, a programming paradigm based on the concept of "objects", which may contain data, in the form of fields, often known as attributes; and code, in the form of procedures, often known as methods. A feature of objects is that an object's procedures can access and often modify the data fields of the object with which they are associated. Thus object-oriented computer programs are made out of objects that interact with one another.
Service-oriented programming, a programming paradigm that uses "services" as the unit of computer work, to design and implement integrated business applications and mission-critical software programs.
Many languages offer support for multiple paradigms, making the distinction more a matter of style than of technical capabilities.
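A minimal Python sketch contrasting the imperative and functional styles on the same illustrative task; the data and names are chosen for the example only.

    numbers = [1, 2, 3, 4, 5]

    # Imperative style: statements change program state step by step.
    squares_imperative = []
    for n in numbers:
        squares_imperative.append(n * n)

    # Functional style: the result is the value of an expression built from
    # functions, with no mutation of intermediate state.
    squares_functional = list(map(lambda n: n * n, numbers))

    print(squares_imperative)   # [1, 4, 9, 16, 25]
    print(squares_functional)   # [1, 4, 9, 16, 25]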
Academia
Conferences are important events for computer science research. During these conferences, researchers from the public and private sectors present their recent work and meet. Unlike in most other academic fields, in computer science, the prestige of conference papers is greater than that of journal publications. One proposed explanation for this is that the quick development of this relatively new field requires rapid review and distribution of results, a task better handled by conferences than by journals.
Education
Computer science, known by its near synonyms computing and computer studies, has been taught in UK schools since the days of batch processing, mark sensitive cards and paper tape, but usually to a select few students. In 1981, the BBC produced a micro-computer and classroom network, and Computer Studies became common for GCE O level students (11–16-year-olds), with Computer Science offered to A level students. Its importance was recognised, and it became a compulsory part of the National Curriculum for Key Stages 3 and 4. In September 2014 it became an entitlement for all pupils over the age of 4.
In the US, with 14,000 school districts deciding the curriculum, provision was fractured. According to a 2010 report by the Association for Computing Machinery (ACM) and Computer Science Teachers Association (CSTA), only 14 out of 50 states have adopted significant education standards for high school computer science.
Israel, New Zealand, and South Korea have included computer science in their national secondary education curricula, and several others are following.
See also
Computer engineering
Computer programming
Digital Revolution
Information and communications technology
Information technology
List of computer scientists
List of computer science awards
List of important publications in computer science
List of pioneers in computer science
List of unsolved problems in computer science
Programming language
Software engineering
Notes
References
Further reading
Overview
"Within more than 70 chapters, every one new or significantly revised, one can find any kind of information and references about computer science one can imagine. […] all in all, there is absolute nothing about Computer Science that can not be found in the 2.5 kilogram-encyclopaedia with its 110 survey articles […]." (Christoph Meinel, Zentralblatt MATH)
"[…] this set is the most unique and possibly the most useful to the [theoretical computer science] community, in support both of teaching and research […]. The books can be used by anyone wanting simply to gain an understanding of one of these areas, or by someone desiring to be in research in a topic, or by instructors wishing to find timely information on a subject they are teaching outside their major areas of expertise." (Rocky Ross, SIGACT News)
"Since 1976, this has been the definitive reference work on computer, computing, and computer science. […] Alphabetically arranged and classified into broad subject areas, the entries cover hardware, computer systems, information and data, software, the mathematics of computing, theory of computation, methodologies, applications, and computing milieu. The editors have done a commendable job of blending historical perspective and practical reference information. The encyclopedia remains essential for most public and academic library reference collections." (Joe Accardin, Northeastern Illinois Univ., Chicago)
Selected literature
"Covering a period from 1966 to 1993, its interest lies not only in the content of each of these papers – still timely today – but also in their being put together so that ideas expressed at different times complement each other nicely." (N. Bernard, Zentralblatt MATH)
Articles
Peter J. Denning. Is computer science science?, Communications of the ACM, April 2005.
Peter J. Denning, Great principles in computing curricula, Technical Symposium on Computer Science Education, 2004.
Research evaluation for computer science, Informatics Europe report . Shorter journal version: Bertrand Meyer, Christine Choppy, Jan van Leeuwen and Jorgen Staunstrup, Research evaluation for computer science, in Communications of the ACM, vol. 52, no. 4, pp. 31–34, April 2009.
Curriculum and classification
Association for Computing Machinery. 1998 ACM Computing Classification System. 1998.
Joint Task Force of Association for Computing Machinery (ACM), Association for Information Systems (AIS) and IEEE Computer Society (IEEE CS). Computing Curricula 2005: The Overview Report. September 30, 2005.
Norman Gibbs, Allen Tucker. "A model curriculum for a liberal arts degree in computer science". Communications of the ACM, Volume 29 Issue 3, March 1986.
External links
Scholarly Societies in Computer Science
What is Computer Science?
Best Papers Awards in Computer Science since 1996
Photographs of computer scientists by Bertrand Meyer
EECS.berkeley.edu
Bibliography and academic search engines
CiteSeerx (article): search engine, digital library and repository for scientific and academic papers with a focus on computer and information science.
DBLP Computer Science Bibliography (article): computer science bibliography website hosted at Universität Trier, in Germany.
The Collection of Computer Science Bibliographies (Collection of Computer Science Bibliographies)
Professional organizations
Association for Computing Machinery
IEEE Computer Society
Informatics Europe
AAAI
AAAS Computer Science
Misc
Computer Science—Stack Exchange: a community-run question-and-answer site for computer science
What is computer science
Is computer science science?
Computer Science (Software) Must be Considered as an Independent Discipline. |
5715 | https://en.wikipedia.org/wiki/Cryptanalysis | Cryptanalysis | Cryptanalysis (from the Greek kryptós, "hidden", and analýein, "to analyze") refers to the process of analyzing information systems in order to understand hidden aspects of the systems. Cryptanalysis is used to breach cryptographic security systems and gain access to the contents of encrypted messages, even if the cryptographic key is unknown.
In addition to mathematical analysis of cryptographic algorithms, cryptanalysis includes the study of side-channel attacks that do not target weaknesses in the cryptographic algorithms themselves, but instead exploit weaknesses in their implementation.
Even though the goal has been the same, the methods and techniques of cryptanalysis have changed drastically through the history of cryptography, adapting to increasing cryptographic complexity, ranging from the pen-and-paper methods of the past, through machines like the British Bombes and Colossus computers at Bletchley Park in World War II, to the mathematically advanced computerized schemes of the present. Methods for breaking modern cryptosystems often involve solving carefully constructed problems in pure mathematics, the best-known being integer factorization.
Overview
Given some encrypted data ("ciphertext"), the goal of the cryptanalyst is to gain as much information as possible about the original, unencrypted data ("plaintext"). Cryptographic attacks can be characterized in a number of ways:
Amount of information available to the attacker
Attacks can be classified based on what type of information the attacker has available. As a basic starting point it is normally assumed that, for the purposes of analysis, the general algorithm is known; this is Shannon's Maxim "the enemy knows the system" – in its turn, equivalent to Kerckhoffs' principle. This is a reasonable assumption in practice – throughout history, there are countless examples of secret algorithms falling into wider knowledge, variously through espionage, betrayal and reverse engineering. (And on occasion, ciphers have been broken through pure deduction; for example, the German Lorenz cipher and the Japanese Purple code, and a variety of classical schemes):
Ciphertext-only: the cryptanalyst has access only to a collection of ciphertexts or codetexts.
Known-plaintext: the attacker has a set of ciphertexts to which they know the corresponding plaintext.
Chosen-plaintext (chosen-ciphertext): the attacker can obtain the ciphertexts (plaintexts) corresponding to an arbitrary set of plaintexts (ciphertexts) of their own choosing.
Adaptive chosen-plaintext: like a chosen-plaintext attack, except the attacker can choose subsequent plaintexts based on information learned from previous encryptions, similarly to the Adaptive chosen ciphertext attack.
Related-key attack: Like a chosen-plaintext attack, except the attacker can obtain ciphertexts encrypted under two different keys. The keys are unknown, but the relationship between them is known; for example, two keys that differ in only one bit.
Computational resources required
Attacks can also be characterised by the resources they require. Those resources include:
Time – the number of computation steps (e.g., test encryptions) which must be performed.
Memory – the amount of storage required to perform the attack.
Data – the quantity and type of plaintexts and ciphertexts required for a particular approach.
It's sometimes difficult to predict these quantities precisely, especially when the attack isn't practical to actually implement for testing. But academic cryptanalysts tend to provide at least the estimated order of magnitude of their attacks' difficulty, saying, for example, "SHA-1 collisions now 2^52."
Bruce Schneier notes that even computationally impractical attacks can be considered breaks: "Breaking a cipher simply means finding a weakness in the cipher that can be exploited with a complexity less than brute force. Never mind that brute-force might require 2^128 encryptions; an attack requiring 2^110 encryptions would be considered a break...simply put, a break can just be a certificational weakness: evidence that the cipher does not perform as advertised."
Partial breaks
The results of cryptanalysis can also vary in usefulness. Cryptographer Lars Knudsen (1998) classified various types of attack on block ciphers according to the amount and quality of secret information that was discovered:
Total break – the attacker deduces the secret key.
Global deduction – the attacker discovers a functionally equivalent algorithm for encryption and decryption, but without learning the key.
Instance (local) deduction – the attacker discovers additional plaintexts (or ciphertexts) not previously known.
Information deduction – the attacker gains some Shannon information about plaintexts (or ciphertexts) not previously known.
Distinguishing algorithm – the attacker can distinguish the cipher from a random permutation.
Academic attacks are often against weakened versions of a cryptosystem, such as a block cipher or hash function with some rounds removed. Many, but not all, attacks become exponentially more difficult to execute as rounds are added to a cryptosystem, so it's possible for the full cryptosystem to be strong even though reduced-round variants are weak. Nonetheless, partial breaks that come close to breaking the original cryptosystem may mean that a full break will follow; the successful attacks on DES, MD5, and SHA-1 were all preceded by attacks on weakened versions.
In academic cryptography, a weakness or a break in a scheme is usually defined quite conservatively: it might require impractical amounts of time, memory, or known plaintexts. It also might require the attacker be able to do things many real-world attackers can't: for example, the attacker may need to choose particular plaintexts to be encrypted or even to ask for plaintexts to be encrypted using several keys related to the secret key. Furthermore, it might only reveal a small amount of information, enough to prove the cryptosystem imperfect but too little to be useful to real-world attackers. Finally, an attack might only apply to a weakened version of cryptographic tools, like a reduced-round block cipher, as a step towards breaking the full system.
History
Cryptanalysis has coevolved together with cryptography, and the contest can be traced through the history of cryptography—new ciphers being designed to replace old broken designs, and new cryptanalytic techniques invented to crack the improved schemes. In practice, they are viewed as two sides of the same coin: secure cryptography requires design against possible cryptanalysis.
Classical ciphers
Although the actual word "cryptanalysis" is relatively recent (it was coined by William Friedman in 1920), methods for breaking codes and ciphers are much older. David Kahn notes in The Codebreakers that Arab scholars were the first people to systematically document cryptanalytic methods.
The first known recorded explanation of cryptanalysis was given by Al-Kindi (c. 801–873, also known as "Alkindus" in Europe), a 9th-century Arab polymath, in Risalah fi Istikhraj al-Mu'amma (A Manuscript on Deciphering Cryptographic Messages). This treatise contains the first description of the method of frequency analysis. Al-Kindi is thus regarded as the first codebreaker in history. His breakthrough work was influenced by Al-Khalil (717–786), who wrote the Book of Cryptographic Messages, which contains the first use of permutations and combinations to list all possible Arabic words with and without vowels.
Frequency analysis is the basic tool for breaking most classical ciphers. In natural languages, certain letters of the alphabet appear more often than others; in English, "E" is likely to be the most common letter in any sample of plaintext. Similarly, the digraph "TH" is the most likely pair of letters in English, and so on. Frequency analysis relies on a cipher failing to hide these statistics. For example, in a simple substitution cipher (where each letter is simply replaced with another), the most frequent letter in the ciphertext would be a likely candidate for "E". Frequency analysis of such a cipher is therefore relatively easy, provided that the ciphertext is long enough to give a reasonably representative count of the letters of the alphabet that it contains.
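A minimal Python sketch of the counting step behind frequency analysis; the sample ciphertext is an illustrative Caesar-shifted English phrase, not drawn from any historical source.

    from collections import Counter

    def letter_frequencies(ciphertext: str) -> list:
        # Count only letters, ignoring spaces and punctuation,
        # and return them ordered from most to least frequent.
        letters = [c for c in ciphertext.upper() if c.isalpha()]
        return Counter(letters).most_common()

    # In a simple substitution cipher, the most frequent ciphertext letters
    # are likely candidates for the most frequent plaintext letters, such as 'E'.
    sample = "XLIVI MW RS WYGL XLMRK EW E JVII PYRGL"
    print(letter_frequencies(sample))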
Al-Kindi's invention of the frequency analysis technique for breaking monoalphabetic substitution ciphers was the most significant cryptanalytic advance until World War II. Al-Kindi's Risalah fi Istikhraj al-Mu'amma described the first cryptanalytic techniques, including some for polyalphabetic ciphers, cipher classification, Arabic phonetics and syntax, and most importantly, gave the first descriptions on frequency analysis. He also covered methods of encipherments, cryptanalysis of certain encipherments, and statistical analysis of letters and letter combinations in Arabic. An important contribution of Ibn Adlan (1187–1268) was on sample size for use of frequency analysis.
In Europe, Italian scholar Giambattista della Porta (1535–1615) was the author of a seminal work on cryptanalysis, De Furtivis Literarum Notis.
Successful cryptanalysis has undoubtedly influenced history; the ability to read the presumed-secret thoughts and plans of others can be a decisive advantage. For example, in England in 1587, Mary, Queen of Scots was tried and executed for treason as a result of her involvement in three plots to assassinate Elizabeth I of England. The plans came to light after her coded correspondence with fellow conspirators was deciphered by Thomas Phelippes.
In Europe during the 15th and 16th centuries, the idea of a polyalphabetic substitution cipher was developed, among others by the French diplomat Blaise de Vigenère (1523–96). For some three centuries, the Vigenère cipher, which uses a repeating key to select different encryption alphabets in rotation, was considered to be completely secure (le chiffre indéchiffrable—"the indecipherable cipher"). Nevertheless, Charles Babbage (1791–1871) and later, independently, Friedrich Kasiski (1805–81) succeeded in breaking this cipher. During World War I, inventors in several countries developed rotor cipher machines such as Arthur Scherbius' Enigma, in an attempt to minimise the repetition that had been exploited to break the Vigenère system.
Ciphers from World War I and World War II
In World War I, the breaking of the Zimmermann Telegram was instrumental in bringing the United States into the war. In World War II, the Allies benefitted enormously from their joint success in the cryptanalysis of the German ciphers – including the Enigma machine and the Lorenz cipher – and Japanese ciphers, particularly 'Purple' and JN-25. 'Ultra' intelligence has been credited with everything from shortening the European war by up to two years to determining its eventual outcome. The war in the Pacific was similarly helped by 'Magic' intelligence.
Cryptanalysis of enemy messages played a significant part in the Allied victory in World War II. F. W. Winterbotham, quoted the western Supreme Allied Commander, Dwight D. Eisenhower, at the war's end as describing Ultra intelligence as having been "decisive" to Allied victory. Sir Harry Hinsley, official historian of British Intelligence in World War II, made a similar assessment about Ultra, saying that it shortened the war "by not less than two years and probably by four years"; moreover, he said that in the absence of Ultra, it is uncertain how the war would have ended.
In practice, frequency analysis relies as much on linguistic knowledge as it does on statistics, but as ciphers became more complex, mathematics became more important in cryptanalysis. This change was particularly evident before and during World War II, where efforts to crack Axis ciphers required new levels of mathematical sophistication. Moreover, automation was first applied to cryptanalysis in that era with the Polish Bomba device, the British Bombe, the use of punched card equipment, and in the Colossus computers – the first electronic digital computers to be controlled by a program.
Indicator
With reciprocal machine ciphers such as the Lorenz cipher and the Enigma machine used by Nazi Germany during World War II, each message had its own key. Usually, the transmitting operator informed the receiving operator of this message key by transmitting some plaintext and/or ciphertext before the enciphered message. This is termed the indicator, as it indicates to the receiving operator how to set his machine to decipher the message.
Poorly designed and implemented indicator systems allowed first Polish cryptographers and then the British cryptographers at Bletchley Park to break the Enigma cipher system. Similar poor indicator systems allowed the British to identify depths that led to the diagnosis of the Lorenz SZ40/42 cipher system, and the comprehensive breaking of its messages without the cryptanalysts seeing the cipher machine.
Depth
Sending two or more messages with the same key is an insecure process. To a cryptanalyst the messages are then said to be "in depth." This may be detected by the messages having the same indicator by which the sending operator informs the receiving operator about the key generator initial settings for the message.
Generally, the cryptanalyst may benefit from lining up identical enciphering operations among a set of messages. For example, the Vernam cipher enciphers by bit-for-bit combining plaintext with a long key using the "exclusive or" operator, which is also known as "modulo-2 addition" (symbolized by ⊕ ):
Plaintext ⊕ Key = Ciphertext
Deciphering combines the same key bits with the ciphertext to reconstruct the plaintext:
Ciphertext ⊕ Key = Plaintext
(In modulo-2 arithmetic, addition is the same as subtraction.) When two such ciphertexts are aligned in depth, combining them eliminates the common key, leaving just a combination of the two plaintexts:
Ciphertext1 ⊕ Ciphertext2 = Plaintext1 ⊕ Plaintext2
The individual plaintexts can then be worked out linguistically by trying probable words (or phrases), also known as "cribs," at various locations; a correct guess, when combined with the merged plaintext stream, produces intelligible text from the other plaintext component:
(Plaintext1 ⊕ Plaintext2) ⊕ Plaintext1 = Plaintext2
The recovered fragment of the second plaintext can often be extended in one or both directions, and the extra characters can be combined with the merged plaintext stream to extend the first plaintext. Working back and forth between the two plaintexts, using the intelligibility criterion to check guesses, the analyst may recover much or all of the original plaintexts. (With only two plaintexts in depth, the analyst may not know which one corresponds to which ciphertext, but in practice this is not a large problem.) When a recovered plaintext is then combined with its ciphertext, the key is revealed:
Plaintext1 ⊕ Ciphertext1 = Key
Knowledge of a key then allows the analyst to read other messages encrypted with the same key, and knowledge of a set of related keys may allow cryptanalysts to diagnose the system used for constructing them.
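A minimal Python sketch of the modulo-2 relationships above, assuming two short illustrative plaintexts and an arbitrary key of the same length: it shows the key cancelling out of two ciphertexts in depth, a correct crib exposing part of the other plaintext, and a recovered plaintext revealing the key.

    def xor(a: bytes, b: bytes) -> bytes:
        return bytes(x ^ y for x, y in zip(a, b))

    key        = b"\x13\x37\xc0\xde\xaa\x55\x0f\xf0\x12\x34\x56"   # arbitrary 11-byte key
    plaintext1 = b"ATTACK DAWN"
    plaintext2 = b"RETREAT NOW"

    ciphertext1 = xor(plaintext1, key)
    ciphertext2 = xor(plaintext2, key)

    # The common key cancels out: Ciphertext1 xor Ciphertext2 = Plaintext1 xor Plaintext2.
    merged = xor(ciphertext1, ciphertext2)
    assert merged == xor(plaintext1, plaintext2)

    # A correct crib for one plaintext exposes the other plaintext at that position.
    crib = b"ATTACK"
    print(xor(merged[:len(crib)], crib))   # b'RETREA'

    # A recovered plaintext combined with its ciphertext reveals the key.
    assert xor(plaintext1, ciphertext1) == key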
Development of modern cryptography
Governments have long recognized the potential benefits of cryptanalysis for intelligence, both military and diplomatic, and established dedicated organizations devoted to breaking the codes and ciphers of other nations, for example, GCHQ and the NSA, organizations which are still very active today.
Even though computation was used to great effect in the cryptanalysis of the Lorenz cipher and other systems during World War II, it also made possible new methods of cryptography orders of magnitude more complex than ever before. Taken as a whole, modern cryptography has become much more impervious to cryptanalysis than the pen-and-paper systems of the past, and now seems to have the upper hand against pure cryptanalysis. The historian David Kahn notes:
Kahn goes on to mention increased opportunities for interception, bugging, side channel attacks, and quantum computers as replacements for the traditional means of cryptanalysis. In 2010, former NSA technical director Brian Snow said that both academic and government cryptographers are "moving very slowly forward in a mature field."
However, any postmortems for cryptanalysis may be premature. While the effectiveness of cryptanalytic methods employed by intelligence agencies remains unknown, many serious attacks against both academic and practical cryptographic primitives have been published in the modern era of computer cryptography:
The block cipher Madryga, proposed in 1984 but not widely used, was found to be susceptible to ciphertext-only attacks in 1998.
FEAL-4, proposed as a replacement for the DES standard encryption algorithm but not widely used, was demolished by a spate of attacks from the academic community, many of which are entirely practical.
The A5/1, A5/2, CMEA, and DECT systems used in mobile and wireless phone technology can all be broken in hours, minutes or even in real-time using widely available computing equipment.
Brute-force keyspace search has broken some real-world ciphers and applications, including single-DES (see EFF DES cracker), 40-bit "export-strength" cryptography, and the DVD Content Scrambling System.
In 2001, Wired Equivalent Privacy (WEP), a protocol used to secure Wi-Fi wireless networks, was shown to be breakable in practice because of a weakness in the RC4 cipher and aspects of the WEP design that made related-key attacks practical. WEP was later replaced by Wi-Fi Protected Access.
In 2008, researchers conducted a proof-of-concept break of SSL using weaknesses in the MD5 hash function and certificate issuer practices that made it possible to exploit collision attacks on hash functions. The certificate issuers involved changed their practices to prevent the attack from being repeated.
Thus, while the best modern ciphers may be far more resistant to cryptanalysis than the Enigma, cryptanalysis and the broader field of information security remain quite active.
Symmetric ciphers
Boomerang attack
Brute-force attack
Davies' attack
Differential cryptanalysis
Impossible differential cryptanalysis
Improbable differential cryptanalysis
Integral cryptanalysis
Linear cryptanalysis
Meet-in-the-middle attack
Mod-n cryptanalysis
Related-key attack
Sandwich attack
Slide attack
XSL attack
Asymmetric ciphers
Asymmetric cryptography (or public-key cryptography) is cryptography that relies on using two (mathematically related) keys; one private, and one public. Such ciphers invariably rely on "hard" mathematical problems as the basis of their security, so an obvious point of attack is to develop methods for solving the problem. The security of two-key cryptography depends on mathematical questions in a way that single-key cryptography generally does not, and conversely links cryptanalysis to wider mathematical research in a new way.
Asymmetric schemes are designed around the (conjectured) difficulty of solving various mathematical problems. If an improved algorithm can be found to solve the problem, then the system is weakened. For example, the security of the Diffie–Hellman key exchange scheme depends on the difficulty of calculating the discrete logarithm. In 1983, Don Coppersmith found a faster way to find discrete logarithms (in certain groups), thereby requiring cryptographers to use larger groups (or different types of groups). RSA's security depends (in part) upon the difficulty of integer factorization – a breakthrough in factoring would impact the security of RSA.
In 1980, one could factor a difficult 50-digit number at an expense of 10^12 elementary computer operations. By 1984 the state of the art in factoring algorithms had advanced to a point where a 75-digit number could be factored in 10^12 operations. Advances in computing technology also meant that the operations could be performed much faster, too. Moore's law predicts that computer speeds will continue to increase. Factoring techniques may continue to do so as well, but will most likely depend on mathematical insight and creativity, neither of which has ever been successfully predictable. 150-digit numbers of the kind once used in RSA have been factored. The effort was greater than above, but was not unreasonable on fast modern computers. By the start of the 21st century, 150-digit numbers were no longer considered a large enough key size for RSA. Numbers with several hundred digits were still considered too hard to factor in 2005, though methods will probably continue to improve over time, requiring key size to keep pace or other methods such as elliptic curve cryptography to be used.
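A minimal Python sketch of factoring by trial division, shown only to illustrate at toy scale why factoring is the point of attack; it is not one of the advanced algorithms discussed above, and the number used is an illustrative textbook-sized modulus.

    def trial_division(n: int) -> list:
        # Factor n by trying successive divisors; feasible only for small numbers.
        # The moduli used in RSA are hundreds of digits long, far beyond this method.
        factors = []
        d = 2
        while d * d <= n:
            while n % d == 0:
                factors.append(d)
                n //= d
            d += 1
        if n > 1:
            factors.append(n)
        return factors

    print(trial_division(3233))   # [53, 61] -- a toy, textbook-sized RSA-style modulus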
Another distinguishing feature of asymmetric schemes is that, unlike attacks on symmetric cryptosystems, any cryptanalysis has the opportunity to make use of knowledge gained from the public key.
Attacking cryptographic hash systems
Birthday attack
Hash function security summary
Rainbow table
Side-channel attacks
Black-bag cryptanalysis
Man-in-the-middle attack
Power analysis
Replay attack
Rubber-hose cryptanalysis
Timing analysis
Quantum computing applications for cryptanalysis
Quantum computers, which are still in the early phases of research, have potential use in cryptanalysis. For example, Shor's algorithm could factor large numbers in polynomial time, in effect breaking some commonly used forms of public-key encryption.
By using Grover's algorithm on a quantum computer, brute-force key search can be made quadratically faster. However, this could be countered by doubling the key length.
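A minimal back-of-the-envelope sketch in Python of that quadratic speed-up: searching a 128-bit key space classically takes on the order of 2^128 trials, while Grover's algorithm needs on the order of 2^64 iterations, which is why doubling the key length restores the original margin.

    import math

    key_bits = 128
    classical_trials = 2 ** key_bits              # about 3.4e38 trial decryptions
    grover_iterations = math.isqrt(2 ** key_bits) # about 1.8e19, i.e. 2**64 iterations

    print(math.log2(grover_iterations))           # 64.0 -- effective key strength is halved
    print(math.log2(math.isqrt(2 ** 256)))        # 128.0 -- doubling the key length restores it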
See also
Economics of security
Global surveillance
Information assurance, a term for information security often used in government
Information security, the overarching goal of most cryptography
National Cipher Challenge
Security engineering, the design of applications and protocols
Security vulnerability; vulnerabilities can include cryptographic or other flaws
Topics in cryptography
Zendian Problem
Historic cryptanalysts
Conel Hugh O'Donel Alexander
Charles Babbage
Lambros D. Callimahos
Joan Clarke
Alastair Denniston
Agnes Meyer Driscoll
Elizebeth Friedman
William F. Friedman
Meredith Gardner
Friedrich Kasiski
Al-Kindi
Dilly Knox
Solomon Kullback
Marian Rejewski
Joseph Rochefort, whose contributions affected the outcome of the Battle of Midway
Frank Rowlett
Abraham Sinkov
Giovanni Soro, the Renaissance's first outstanding cryptanalyst
John Tiltman
Alan Turing
William T. Tutte
John Wallis – 17th-century English mathematician
William Stone Weedon – worked with Fredson Bowers in World War II
Herbert Yardley
References
Citations
Sources
Ibrahim A. Al-Kadi,"The origins of cryptology: The Arab contributions", Cryptologia, 16(2) (April 1992) pp. 97–126.
Friedrich L. Bauer: "Decrypted Secrets". Springer 2002.
Helen Fouché Gaines, "Cryptanalysis", 1939, Dover.
David Kahn, "The Codebreakers – The Story of Secret Writing", 1967.
Lars R. Knudsen: Contemporary Block Ciphers. Lectures on Data Security 1998: 105–126
Abraham Sinkov, Elementary Cryptanalysis: A Mathematical Approach, Mathematical Association of America, 1966.
Christopher Swenson, Modern Cryptanalysis: Techniques for Advanced Code Breaking,
Friedman, William F., Military Cryptanalysis, Part I,
Friedman, William F., Military Cryptanalysis, Part II,
Friedman, William F., Military Cryptanalysis, Part III, Simpler Varieties of Aperiodic Substitution Systems,
Friedman, William F., Military Cryptanalysis, Part IV, Transposition and Fractionating Systems,
Friedman, William F. and Lambros D. Callimahos, Military Cryptanalytics, Part I, Volume 1,
Friedman, William F. and Lambros D. Callimahos, Military Cryptanalytics, Part I, Volume 2,
Friedman, William F. and Lambros D. Callimahos, Military Cryptanalytics, Part II, Volume 1,
Friedman, William F. and Lambros D. Callimahos, Military Cryptanalytics, Part II, Volume 2,
in
Transcript of a lecture given by Prof. Tutte at the University of Waterloo
Further reading
External links
Basic Cryptanalysis (files contain 5 line header, that has to be removed first)
Distributed Computing Projects
List of tools for cryptanalysis on modern cryptography
Simon Singh's crypto corner
The National Museum of Computing
UltraAnvil tool for attacking simple substitution ciphers
How Alan Turing Cracked The Enigma Code Imperial War Museums
Cryptographic attacks
Applied mathematics
Arab inventions |
5749 | https://en.wikipedia.org/wiki/Key%20size | Key size | In cryptography, key size, key length, or key space refer to the number of bits in a key used by a cryptographic algorithm (such as a cipher).
Key length defines the upper-bound on an algorithm's security (i.e. a logarithmic measure of the fastest known attack against an algorithm), since the security of all algorithms can be violated by brute-force attacks. Ideally, the lower-bound on an algorithm's security is by design equal to the key length (that is, the security is determined entirely by the keylength, or in other words, the algorithm's design does not detract from the degree of security inherent in the key length). Indeed, most symmetric-key algorithms are designed to have security equal to their key length. However, after design, a new attack might be discovered. For instance, Triple DES was designed to have a 168-bit key, but an attack of complexity 2^112 is now known (i.e. Triple DES now only has 112 bits of security, and of the 168 bits in the key the attack has rendered 56 'ineffective' towards security). Nevertheless, as long as the security (understood as "the amount of effort it would take to gain access") is sufficient for a particular application, then it does not matter if key length and security coincide. This is important for asymmetric-key algorithms, because no such algorithm is known to satisfy this property; elliptic curve cryptography comes the closest with an effective security of roughly half its key length.
Significance
Keys are used to control the operation of a cipher so that only the correct key can convert encrypted text (ciphertext) to plaintext. Many ciphers are actually based on publicly known algorithms or are open source and so it is only the difficulty of obtaining the key that determines security of the system, provided that there is no analytic attack (i.e. a "structural weakness" in the algorithms or protocols used), and assuming that the key is not otherwise available (such as via theft, extortion, or compromise of computer systems). The widely accepted notion that the security of the system should depend on the key alone has been explicitly formulated by Auguste Kerckhoffs (in the 1880s) and Claude Shannon (in the 1940s); the statements are known as Kerckhoffs' principle and Shannon's Maxim respectively.
A key should, therefore, be large enough that a brute-force attack (possible against any encryption algorithm) is infeasible – i.e. would take too long to execute. Shannon's work on information theory showed that to achieve so-called 'perfect secrecy', the key length must be at least as large as the message and only used once (this algorithm is called the one-time pad). In light of this, and the practical difficulty of managing such long keys, modern cryptographic practice has discarded the notion of perfect secrecy as a requirement for encryption, and instead focuses on computational security, under which the computational requirements of breaking an encrypted text must be infeasible for an attacker.
Key size and encryption system
Encryption systems are often grouped into families. Common families include symmetric systems (e.g. AES) and asymmetric systems (e.g. RSA); they may alternatively be grouped according to the central algorithm used (e.g. elliptic curve cryptography). As each of these is of a different level of cryptographic complexity, it is usual to have different key sizes for the same level of security, depending upon the algorithm used. For example, the security available with a 1024-bit key using asymmetric RSA is considered approximately equal in security to an 80-bit key in a symmetric algorithm.
The actual degree of security achieved over time varies, as more computational power and more powerful mathematical analytic methods become available. For this reason, cryptologists tend to look at indicators that an algorithm or key length shows signs of potential vulnerability, to move to longer key sizes or more difficult algorithms. For example, a 1039-bit integer was factored with the special number field sieve using 400 computers over 11 months. The factored number was of a special form; the special number field sieve cannot be used on RSA keys. The computation is roughly equivalent to breaking a 700-bit RSA key. However, this might be an advance warning that 1024-bit RSA keys used in secure online commerce should be deprecated, since they may become breakable in the near future. Cryptography professor Arjen Lenstra observed that "Last time, it took nine years for us to generalize from a special to a nonspecial, hard-to-factor number" and when asked whether 1024-bit RSA keys are dead, said: "The answer to that question is an unqualified yes."
The 2015 Logjam attack revealed additional dangers in using Diffie-Hellman key exchange when only one or a few common 1024-bit or smaller prime moduli are in use. This common practice allows large amounts of communications to be compromised at the expense of attacking a small number of primes.
Brute-force attack
Even if a symmetric cipher is currently unbreakable by exploiting structural weaknesses in its algorithm, it is possible to run through the entire space of keys in what is known as a brute-force attack. Since longer symmetric keys require exponentially more work to brute force search, a sufficiently long symmetric key makes this line of attack impractical.
With a key of length n bits, there are 2^n possible keys. This number grows very rapidly as n increases. The large number of operations (2^128) required to try all possible 128-bit keys is widely considered out of reach for conventional digital computing techniques for the foreseeable future. However, experts anticipate alternative computing technologies that may have processing power superior to current computer technology. If a suitably sized quantum computer capable of running Grover's algorithm reliably becomes available, it would reduce a 128-bit key down to 64-bit security, roughly a DES equivalent. This is one of the reasons why AES supports a 256-bit key length.
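To make the scale of 2^n concrete, the following sketch estimates how long an exhaustive key search would take for several key lengths; the search rate of 10^12 keys per second is an assumed figure chosen purely for illustration, not a benchmark of any real attacker.

# Rough brute-force cost estimate for several key lengths (Python).
# The search rate of 1e12 keys per second is an assumed, illustrative figure.
SECONDS_PER_YEAR = 365.25 * 24 * 3600
ASSUMED_RATE = 1e12  # keys tried per second (hypothetical attacker)

for bits in (40, 56, 64, 80, 112, 128, 256):
    # On average, half the key space must be searched before the key is found.
    expected_years = (2 ** bits / 2) / ASSUMED_RATE / SECONDS_PER_YEAR
    print(f"{bits:3d}-bit key: 2^{bits} keys, about {expected_years:.3g} years on average")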
Symmetric algorithm key lengths
US Government export policy has long restricted the "strength" of cryptography that can be sent out of the country. For many years the limit was 40 bits. Today, a key length of 40 bits offers little protection against even a casual attacker with a single PC. In response, by the year 2000, most of the major US restrictions on the use of strong encryption were relaxed. However, not all regulations have been removed, and encryption registration with the U.S. Bureau of Industry and Security is still required to export "mass market encryption commodities, software and components with encryption exceeding 64 bits".
IBM's Lucifer cipher was selected in 1974 as the base for what would become the Data Encryption Standard. Lucifer's key length was reduced from 128 bits to 56 bits, which the NSA and NIST argued was sufficient. The NSA has major computing resources and a large budget; some cryptographers including Whitfield Diffie and Martin Hellman complained that this made the cipher so weak that NSA computers would be able to break a DES key in a day through brute force parallel computing. The NSA disputed this, claiming that brute-forcing DES would take them "something like 91 years". However, by the late 1990s, it became clear that DES could be cracked in a few days' time-frame with custom-built hardware such as could be purchased by a large corporation or government. The book Cracking DES (O'Reilly and Associates) tells of the successful attempt in 1998 to break 56-bit DES by a brute-force attack mounted by a cyber civil rights group with limited resources; see EFF DES cracker. Even before that demonstration, 56 bits was considered insufficient length for symmetric algorithm keys; DES has been replaced in many applications by Triple DES, which has 112 bits of security when used with 168-bit keys (triple key). In 2002, Distributed.net and its volunteers broke a 64-bit RC5 key after several years' effort, using about seventy thousand (mostly home) computers.
The Advanced Encryption Standard published in 2001 uses key sizes of 128, 192 or 256 bits. Many observers consider 128 bits sufficient for the foreseeable future for symmetric algorithms of AES's quality until quantum computers become available. However, as of 2015, the U.S. National Security Agency has issued guidance that it plans to switch to quantum computing resistant algorithms and now requires 256-bit AES keys for data classified up to Top Secret.
In 2003, the U.S. National Institute of Standards and Technology (NIST) proposed phasing out 80-bit keys by 2015. As of 2005, 80-bit keys were to be allowed only until 2010.
Since 2015, NIST guidance says that "the use of keys that provide less than 112 bits of security strength for key agreement is now disallowed." NIST approved symmetric encryption algorithms include three-key Triple DES, and AES. Approvals for two-key Triple DES and Skipjack were withdrawn in 2015; the NSA's Skipjack algorithm used in its Fortezza program employs 80-bit keys.
Asymmetric algorithm key lengths
The effectiveness of public key cryptosystems depends on the intractability (computational and theoretical) of certain mathematical problems such as integer factorization. These problems are time-consuming to solve, but usually faster than trying all possible keys by brute force. Thus, asymmetric keys must be longer for equivalent resistance to attack than symmetric algorithm keys. The most common methods are assumed to be weak against sufficiently powerful quantum computers in the future.
Since 2015, NIST recommends a minimum of 2048-bit keys for RSA, an update to the widely-accepted recommendation of a 1024-bit minimum since at least 2002.
1024-bit RSA keys are equivalent in strength to 80-bit symmetric keys, 2048-bit RSA keys to 112-bit symmetric keys, 3072-bit RSA keys to 128-bit symmetric keys, and 15360-bit RSA keys to 256-bit symmetric keys. In 2003, RSA Security claimed that 1024-bit keys were likely to become crackable some time between 2006 and 2010, while 2048-bit keys are sufficient until 2030. The largest RSA key publicly known to have been cracked is RSA-250, with 829 bits.
The Finite Field Diffie-Hellman algorithm has roughly the same key strength as RSA for the same key sizes. The work factor for breaking Diffie-Hellman is based on the discrete logarithm problem, which is related to the integer factorization problem on which RSA's strength is based. Thus, a 2048-bit Diffie-Hellman key has about the same strength as a 2048-bit RSA key.
Elliptic-curve cryptography (ECC) is an alternative set of asymmetric algorithms that is equivalently secure with shorter keys, requiring only approximately twice as many bits as the equivalent symmetric algorithm. A 256-bit ECDH key has approximately the same safety factor as a 128-bit AES key. A message encrypted with an elliptic key algorithm using a 109-bit long key was broken in 2004.
The NSA previously recommended 256-bit ECC for protecting classified information up to the SECRET level, and 384-bit for TOP SECRET. In 2015, it announced plans to transition to quantum-resistant algorithms by 2024, and until then recommends 384-bit keys for all classified information.
Effect of quantum computing attacks on key strength
The two best known quantum computing attacks are based on Shor's algorithm and Grover's algorithm. Of the two, Shor's offers the greater risk to current security systems.
Derivatives of Shor's algorithm are widely conjectured to be effective against all mainstream public-key algorithms including RSA, Diffie-Hellman and elliptic curve cryptography. According to Professor Gilles Brassard, an expert in quantum computing: "The time needed to factor an RSA integer is the same order as the time needed to use that same integer as modulus for a single RSA encryption. In other words, it takes no more time to break RSA on a quantum computer (up to a multiplicative constant) than to use it legitimately on a classical computer." The general consensus is that these public key algorithms are insecure at any key size if sufficiently large quantum computers capable of running Shor's algorithm become available. The implication of this attack is that all data encrypted using current standards based security systems such as the ubiquitous SSL used to protect e-commerce and Internet banking and SSH used to protect access to sensitive computing systems is at risk. Encrypted data protected using public-key algorithms can be archived and may be broken at a later time.
Mainstream symmetric ciphers (such as AES or Twofish) and collision resistant hash functions (such as SHA) are widely conjectured to offer greater security against known quantum computing attacks. They are widely thought most vulnerable to Grover's algorithm. Bennett, Bernstein, Brassard, and Vazirani proved in 1996 that a brute-force key search on a quantum computer cannot be faster than roughly 2^(n/2) invocations of the underlying cryptographic algorithm, compared with roughly 2^n in the classical case. Thus in the presence of large quantum computers an n-bit key can provide at least n/2 bits of security. Quantum brute force is easily defeated by doubling the key length, which has little extra computational cost in ordinary use. This implies that at least a 256-bit symmetric key is required to achieve a 128-bit security rating against a quantum computer. As mentioned above, the NSA announced in 2015 that it plans to transition to quantum-resistant algorithms.
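As a minimal back-of-the-envelope sketch of the n/2 rule (this says nothing about any particular quantum hardware; it simply restates the bound above):

# Effective security of an n-bit symmetric key against Grover-style search:
# roughly 2^(n/2) invocations instead of 2^n, i.e. about n/2 bits of security.
for n in (128, 192, 256):
    classical_bits = n       # brute force needs ~2^n work classically
    quantum_bits = n // 2    # Grover's algorithm needs ~2^(n/2) invocations
    print(f"{n}-bit key: ~{classical_bits}-bit classical security, "
          f"~{quantum_bits}-bit security against Grover's algorithm")
# Doubling the key length (e.g. moving from 128-bit to 256-bit AES) restores
# the original security level against this quantum attack.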
According to the NSA:
The NSA's Commercial National Security Algorithm Suite includes:
See also
Key stretching
Notes
References
General
Recommendation for Key Management — Part 1: general, NIST Special Publication 800-57. March, 2007
Blaze, Matt; Diffie, Whitfield; Rivest, Ronald L.; et al. "Minimal Key Lengths for Symmetric Ciphers to Provide Adequate Commercial Security". January, 1996
Arjen K. Lenstra, Eric R. Verheul: Selecting Cryptographic Key Sizes. J. Cryptology 14(4): 255-293 (2001) — Citeseer link
External links
www.keylength.com: An online keylength calculator
Articles discussing the implications of quantum computing
NIST cryptographic toolkit
Burt Kaliski: TWIRL and RSA key sizes (May 2003)
Key management |
6115 | https://en.wikipedia.org/wiki/P%20versus%20NP%20problem | P versus NP problem | The P versus NP problem is a major unsolved problem in computer science. It asks whether every problem whose solution can be quickly verified can also be solved quickly.
The informal term quickly, used above, means the existence of an algorithm solving the task that runs in polynomial time, such that the time to complete the task varies as a polynomial function on the size of the input to the algorithm (as opposed to, say, exponential time). The general class of questions for which some algorithm can provide an answer in polynomial time is "P" or "class P". For some questions, there is no known way to find an answer quickly, but if one is provided with information showing what the answer is, it is possible to verify the answer quickly. The class of questions for which an answer can be verified in polynomial time is NP, which stands for "nondeterministic polynomial time".
An answer to the P versus NP question would determine whether problems that can be verified in polynomial time can also be solved in polynomial time. If it turns out that P ≠ NP, which is widely believed, it would mean that there are problems in NP that are harder to compute than to verify: they could not be solved in polynomial time, but the answer could be verified in polynomial time.
The problem is considered by many to be the most important open problem in computer science. Aside from being an important problem in computational theory, a proof either way would have profound implications for mathematics, cryptography, algorithm research, artificial intelligence, game theory, multimedia processing, philosophy, economics and many other fields.
It is one of the seven Millennium Prize Problems selected by the Clay Mathematics Institute, each of which carries a US$1,000,000 prize for the first correct solution.
Example
Consider Sudoku, a game where the player is given a partially filled-in grid of numbers and attempts to complete the grid following certain rules. Given an incomplete Sudoku grid, of any size, is there at least one legal solution? Any proposed solution is easily verified, and the time to check a solution grows slowly (polynomially) as the grid gets bigger. However, all known algorithms for finding solutions take, for difficult examples, time that grows exponentially as the grid gets bigger. So, Sudoku is in NP (quickly checkable) but does not seem to be in P (quickly solvable). Thousands of other problems seem similar, in that they are fast to check but slow to solve. Researchers have shown that many of the problems in NP have the extra property that a fast solution to any one of them could be used to build a quick solution to any other problem in NP, a property called NP-completeness. Decades of searching have not yielded a fast solution to any of these problems, so most scientists suspect that none of these problems can be solved quickly. This, however, has never been proven.
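The gap between checking and solving can be made concrete with a short verifier. The sketch below (the 9×9 grid size and the plain list-of-lists encoding are illustrative choices) checks a proposed Sudoku solution in time polynomial in the number of cells, whereas no comparably fast general solver is known:

def is_valid_sudoku_solution(grid):
    """Check a completed 9x9 grid: every row, column and 3x3 box must
    contain the digits 1..9 exactly once. Runs in time polynomial
    (in fact linear) in the number of cells; a full verifier would also
    check that the solution agrees with the puzzle's given clues."""
    digits = set(range(1, 10))
    rows = [set(row) for row in grid]
    cols = [set(col) for col in zip(*grid)]
    boxes = [
        {grid[r][c] for r in range(br, br + 3) for c in range(bc, bc + 3)}
        for br in (0, 3, 6) for bc in (0, 3, 6)
    ]
    return all(group == digits for group in rows + cols + boxes)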
History
The precise statement of the P versus NP problem was introduced in 1971 by Stephen Cook in his seminal paper "The complexity of theorem proving procedures" (and independently by Leonid Levin in 1973).
Although the P versus NP problem was formally defined in 1971, there were previous inklings of the problems involved, the difficulty of proof, and the potential consequences. In 1955, mathematician John Nash wrote a letter to the NSA, where he speculated that cracking a sufficiently complex code would require time exponential in the length of the key. If proved (and Nash was suitably skeptical) this would imply what is now called P ≠ NP, since a proposed key can easily be verified in polynomial time. Another mention of the underlying problem occurred in a 1956 letter written by Kurt Gödel to John von Neumann. Gödel asked whether theorem-proving (now known to be co-NP-complete) could be solved in quadratic or linear time, and pointed out one of the most important consequences—that if so, then the discovery of mathematical proofs could be automated.
Context
The relation between the complexity classes P and NP is studied in computational complexity theory, the part of the theory of computation dealing with the resources required during computation to solve a given problem. The most common resources are time (how many steps it takes to solve a problem) and space (how much memory it takes to solve a problem).
In such analysis, a model of the computer for which time must be analyzed is required. Typically such models assume that the computer is deterministic (given the computer's present state and any inputs, there is only one possible action that the computer might take) and sequential (it performs actions one after the other).
In this theory, the class P consists of all those decision problems (defined below) that can be solved on a deterministic sequential machine in an amount of time that is polynomial in the size of the input; the class NP consists of all those decision problems whose positive solutions can be verified in polynomial time given the right information, or equivalently, whose solution can be found in polynomial time on a non-deterministic machine. Clearly, P ⊆ NP. Arguably, the biggest open question in theoretical computer science concerns the relationship between those two classes:
Is P equal to NP?
Since 2002, William Gasarch has conducted three polls of researchers concerning this and related questions. Confidence that P ≠ NP has been increasing – in 2019, 88% believed P ≠ NP, as opposed to 83% in 2012 and 61% in 2002. When restricted to experts, the 2019 answers became 99% believe P ≠ NP. These polls do not imply anything about whether P = NP is true, as stated by Gasarch himself: "This does not bring us any closer to solving P=?NP or to knowing when it will be solved, but it attempts to be an objective report on the subjective opinion of this era."
NP-completeness
To attack the P = NP question, the concept of NP-completeness is very useful. NP-complete problems are a set of problems to each of which any other NP-problem can be reduced in polynomial time and whose solution may still be verified in polynomial time. That is, any NP problem can be transformed into any of the NP-complete problems. Informally, an NP-complete problem is an NP problem that is at least as "tough" as any other problem in NP.
NP-hard problems are those at least as hard as NP problems; i.e., all NP problems can be reduced (in polynomial time) to them. NP-hard problems need not be in NP; i.e., they need not have solutions verifiable in polynomial time.
For instance, the Boolean satisfiability problem is NP-complete by the Cook–Levin theorem, so any instance of any problem in NP can be transformed mechanically into an instance of the Boolean satisfiability problem in polynomial time. The Boolean satisfiability problem is one of many such NP-complete problems. If any NP-complete problem is in P, then it would follow that P = NP. However, many important problems have been shown to be NP-complete, and no fast algorithm for any of them is known.
Based on the definition alone it is not obvious that NP-complete problems exist; however, a trivial and contrived NP-complete problem can be formulated as follows: given a description of a Turing machine M guaranteed to halt in polynomial time, does there exist a polynomial-size input that M will accept? It is in NP because (given an input) it is simple to check whether M accepts the input by simulating M; it is NP-complete because the verifier for any particular instance of a problem in NP can be encoded as a polynomial-time machine M that takes the solution to be verified as input. Then the question of whether the instance is a yes or no instance is determined by whether a valid input exists.
The first natural problem proven to be NP-complete was the Boolean satisfiability problem, also known as SAT. As noted above, this is the Cook–Levin theorem; its proof that satisfiability is NP-complete contains technical details about Turing machines as they relate to the definition of NP. However, after this problem was proved to be NP-complete, proof by reduction provided a simpler way to show that many other problems are also NP-complete, including the game Sudoku discussed earlier. In this case, the proof shows that a solution of Sudoku in polynomial time could also be used to complete Latin squares in polynomial time. This in turn gives a solution to the problem of partitioning tri-partite graphs into triangles, which could then be used to find solutions for the special case of SAT known as 3-SAT, which then provides a solution for general Boolean satisfiability. So a polynomial-time solution to Sudoku leads, by a series of mechanical transformations, to a polynomial time solution of satisfiability, which in turn can be used to solve any other NP-problem in polynomial time. Using transformations like this, a vast class of seemingly unrelated problems are all reducible to one another, and are in a sense "the same problem".
Harder problems
Although it is unknown whether P = NP, problems outside of P are known. Just as the class P is defined in terms of polynomial running time, the class EXPTIME is the set of all decision problems that have exponential running time. In other words, any problem in EXPTIME is solvable by a deterministic Turing machine in O(2^p(n)) time, where p(n) is a polynomial function of n. A decision problem is EXPTIME-complete if it is in EXPTIME, and every problem in EXPTIME has a polynomial-time many-one reduction to it. A number of problems are known to be EXPTIME-complete. Because it can be shown that P ≠ EXPTIME, these problems are outside P, and so require more than polynomial time. In fact, by the time hierarchy theorem, they cannot be solved in significantly less than exponential time. Examples include finding a perfect strategy for chess positions on an N × N board and similar problems for other board games.
The problem of deciding the truth of a statement in Presburger arithmetic requires even more time. Fischer and Rabin proved in 1974 that every algorithm that decides the truth of Presburger statements of length n has a runtime of at least 2^(2^(cn)) for some constant c. Hence, the problem is known to need more than exponential run time. Even more difficult are the undecidable problems, such as the halting problem. They cannot be completely solved by any algorithm, in the sense that for any particular algorithm there is at least one input for which that algorithm will not produce the right answer; it will either produce the wrong answer, finish without giving a conclusive answer, or otherwise run forever without producing any answer at all.
It is also possible to consider questions other than decision problems. One such class, consisting of counting problems, is called #P: whereas an NP problem asks "Are there any solutions?", the corresponding #P problem asks "How many solutions are there?" Clearly, a #P problem must be at least as hard as the corresponding NP problem, since a count of solutions immediately tells if at least one solution exists, if the count is greater than zero. Surprisingly, some #P problems that are believed to be difficult correspond to easy (for example linear-time) P problems. For these problems, it is very easy to tell whether solutions exist, but thought to be very hard to tell how many. Many of these problems are #P-complete, and hence among the hardest problems in #P, since a polynomial time solution to any of them would allow a polynomial time solution to all other #P problems.
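The contrast between the NP-style and #P-style questions can be sketched for the subset-sum problem; the brute-force enumeration below is exponential-time and only meant to show how the two questions differ, not how they are solved efficiently:

from itertools import combinations

def subsets_summing_to_zero(numbers):
    """Return every non-empty subset of 'numbers' that sums to zero
    (exponential-time enumeration, fine only for tiny instances)."""
    hits = []
    for size in range(1, len(numbers) + 1):
        hits.extend(c for c in combinations(numbers, size) if sum(c) == 0)
    return hits

sample = [-7, -3, -2, 5, 8]
solutions = subsets_summing_to_zero(sample)
print("NP-style question (does a solution exist?):", bool(solutions))
print("#P-style question (how many solutions?):   ", len(solutions))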
Problems in NP not known to be in P or NP-complete
In 1975, Richard E. Ladner showed that if P ≠ NP then there exist problems in NP that are neither in P nor NP-complete. Such problems are called NP-intermediate problems. The graph isomorphism problem, the discrete logarithm problem and the integer factorization problem are examples of problems believed to be NP-intermediate. They are some of the very few NP problems not known to be in P or to be NP-complete.
The graph isomorphism problem is the computational problem of determining whether two finite graphs are isomorphic. An important unsolved problem in complexity theory is whether the graph isomorphism problem is in P, NP-complete, or NP-intermediate. The answer is not known, but it is believed that the problem is at least not NP-complete. If graph isomorphism is NP-complete, the polynomial time hierarchy collapses to its second level. Since it is widely believed that the polynomial hierarchy does not collapse to any finite level, it is believed that graph isomorphism is not NP-complete. The best algorithm for this problem, due to László Babai, runs in quasi-polynomial time.
The integer factorization problem is the computational problem of determining the prime factorization of a given integer. Phrased as a decision problem, it is the problem of deciding whether the input has a factor less than k. No efficient integer factorization algorithm is known, and this fact forms the basis of several modern cryptographic systems, such as the RSA algorithm. The integer factorization problem is in NP and in co-NP (and even in UP and co-UP). If the problem is NP-complete, the polynomial time hierarchy will collapse to its first level (i.e., NP = co-NP). The most efficient known algorithm for integer factorization is the general number field sieve, which takes expected time exp(O(n^(1/3) (log n)^(2/3)))
to factor an n-bit integer. However, the best known quantum algorithm for this problem, Shor's algorithm, does run in polynomial time, although this does not indicate where the problem lies with respect to non-quantum complexity classes.
Does P mean "easy"?
All of the above discussion has assumed that P means "easy" and "not in P" means "difficult", an assumption known as Cobham's thesis. It is a common and reasonably accurate assumption in complexity theory; however, it has some caveats.
First, it is not always true in practice. A theoretical polynomial algorithm may have extremely large constant factors or exponents, thus rendering it impractical. For example, the problem of deciding whether a graph G contains H as a minor, where H is fixed, can be solved in a running time of O(n^2), where n is the number of vertices in G. However, the big O notation hides a constant that depends superexponentially on H. The constant, expressed in Knuth's up-arrow notation, is an iterated exponential tower in h, where h is the number of vertices in H.
On the other hand, even if a problem is shown to be NP-complete, and even if P ≠ NP, there may still be effective approaches to tackling the problem in practice. There are algorithms for many NP-complete problems, such as the knapsack problem, the traveling salesman problem and the Boolean satisfiability problem, that can solve to optimality many real-world instances in reasonable time. The empirical average-case complexity (time vs. problem size) of such algorithms can be surprisingly low. An example is the simplex algorithm in linear programming, which works surprisingly well in practice; despite having exponential worst-case time complexity, it runs on par with the best known polynomial-time algorithms.
Finally, there are types of computations which do not conform to the Turing machine model on which P and NP are defined, such as quantum computation and randomized algorithms.
Reasons to believe P ≠ NP or P = NP
Cook provides a restatement of the problem in "The P versus NP Problem" as: Does P = NP? According to polls, most computer scientists believe that P ≠ NP. A key reason for this belief is that after decades of studying these problems no one has been able to find a polynomial-time algorithm for any of more than 3000 important known NP-complete problems (see List of NP-complete problems). These algorithms were sought long before the concept of NP-completeness was even defined (Karp's 21 NP-complete problems, among the first found, were all well-known existing problems at the time they were shown to be NP-complete). Furthermore, the result P = NP would imply many other startling results that are currently believed to be false, such as NP = co-NP and P = PH.
It is also intuitively argued that the existence of problems that are hard to solve but for which the solutions are easy to verify matches real-world experience.
On the other hand, some researchers believe that there is overconfidence in believing P ≠ NP and that researchers should explore proofs of P = NP as well. For example, in 2002 these statements were made:
Consequences of solution
One of the reasons the problem attracts so much attention is the consequences of the possible answers. Either direction of resolution would advance theory enormously, and perhaps have huge practical consequences as well.
P = NP
A proof that P = NP could have stunning practical consequences if the proof leads to efficient methods for solving some of the important problems in NP. The potential consequences, both positive and negative, arise since various NP-complete problems are fundamental in many fields.
It is also very possible that a proof would not lead to practical algorithms for NP-complete problems. The formulation of the problem does not require that the bounding polynomial be small or even specifically known. A non-constructive proof might show a solution exists without specifying either an algorithm to obtain it or a specific bound. Even if the proof is constructive, showing an explicit bounding polynomial and algorithmic details, if the polynomial is not very low-order the algorithm might not be sufficiently efficient in practice. In this case the initial proof would be mainly of interest to theoreticians, but the knowledge that polynomial-time solutions are possible would surely spur research into better (and possibly practical) methods to achieve them.
An example of a field that could be upended by a solution showing P = NP is cryptography, which relies on certain problems being difficult. A constructive and efficient solution to an NP-complete problem such as 3-SAT would break most existing cryptosystems including:
Existing implementations of public-key cryptography, a foundation for many modern security applications such as secure financial transactions over the Internet.
Symmetric ciphers such as AES or 3DES, used for the encryption of communications data.
Cryptographic hashing, which underlies blockchain cryptocurrencies such as Bitcoin, and is used to authenticate software updates. For these applications, the problem of finding a pre-image that hashes to a given value must be difficult in order to be useful, and ideally should require exponential time. However, if P = NP, then finding a pre-image M can be done in polynomial time, through reduction to SAT.
These would need to be modified or replaced by information-theoretically secure solutions not inherently based on P-NP inequivalence.
On the other hand, there are enormous positive consequences that would follow from rendering tractable many currently mathematically intractable problems. For instance, many problems in operations research are NP-complete, such as some types of integer programming and the travelling salesman problem. Efficient solutions to these problems would have enormous implications for logistics. Many other important problems, such as some problems in protein structure prediction, are also NP-complete; if these problems were efficiently solvable, it could spur considerable advances in life sciences and biotechnology.
But such changes may pale in significance compared to the revolution an efficient method for solving NP-complete problems would cause in mathematics itself. Gödel, in his early thoughts on computational complexity, noted that a mechanical method that could solve any problem would revolutionize mathematics:
Similarly, Stephen Cook (assuming not only a proof, but a practically efficient algorithm) says
Research mathematicians spend their careers trying to prove theorems, and some proofs have taken decades or even centuries to find after problems have been stated—for instance, Fermat's Last Theorem took over three centuries to prove. A method that is guaranteed to find proofs to theorems, should one exist of a "reasonable" size, would essentially end this struggle.
Donald Knuth has stated that he has come to believe that P = NP, but is reserved about the impact of a possible proof:
P ≠ NP
A proof that showed that P ≠ NP would lack the practical computational benefits of a proof that P = NP, but would nevertheless represent a very significant advance in computational complexity theory and provide guidance for future research. It would allow one to show in a formal way that many common problems cannot be solved efficiently, so that the attention of researchers can be focused on partial solutions or solutions to other problems. Due to widespread belief in P ≠ NP, much of this focusing of research has already taken place.
Also P ≠ NP still leaves open the average-case complexity of hard problems in NP. For example, it is possible that SAT requires exponential time in the worst case, but that almost all randomly selected instances of it are efficiently solvable. Russell Impagliazzo has described five hypothetical "worlds" that could result from different possible resolutions to the average-case complexity question. These range from "Algorithmica", where P = NP and problems like SAT can be solved efficiently in all instances, to "Cryptomania", where P ≠ NP and generating hard instances of problems outside P is easy, with three intermediate possibilities reflecting different possible distributions of difficulty over instances of NP-hard problems. The "world" where P ≠ NP but all problems in NP are tractable in the average case is called "Heuristica" in the paper. A Princeton University workshop in 2009 studied the status of the five worlds.
Results about difficulty of proof
Although the P = NP problem itself remains open despite a million-dollar prize and a huge amount of dedicated research, efforts to solve the problem have led to several new techniques. In particular, some of the most fruitful research related to the P = NP problem has been in showing that existing proof techniques are not powerful enough to answer the question, thus suggesting that novel technical approaches are required.
As additional evidence for the difficulty of the problem, essentially all known proof techniques in computational complexity theory fall into one of the following classifications, each of which is known to be insufficient to prove that P ≠ NP:
These barriers are another reason why NP-complete problems are useful: if a polynomial-time algorithm can be demonstrated for an NP-complete problem, this would solve the P = NP problem in a way not excluded by the above results.
These barriers have also led some computer scientists to suggest that the P versus NP problem may be independent of standard axiom systems like ZFC (cannot be proved or disproved within them). The interpretation of an independence result could be that either no polynomial-time algorithm exists for any NP-complete problem, and such a proof cannot be constructed in (e.g.) ZFC, or that polynomial-time algorithms for NP-complete problems may exist, but it is impossible to prove in ZFC that such algorithms are correct. However, if it can be shown, using techniques of the sort that are currently known to be applicable, that the problem cannot be decided even with much weaker assumptions extending the Peano axioms (PA) for integer arithmetic, then there would necessarily exist nearly-polynomial-time algorithms for every problem in NP. Therefore, if one believes (as most complexity theorists do) that not all problems in NP have efficient algorithms, it would follow that proofs of independence using those techniques cannot be possible. Additionally, this result implies that proving independence from PA or ZFC using currently known techniques is no easier than proving the existence of efficient algorithms for all problems in NP.
Claimed solutions
While the P versus NP problem is generally considered unsolved, many amateur and some professional researchers have claimed solutions. Gerhard J. Woeginger maintains a list that, as of 2016, contains 62 purported proofs of P = NP, 50 proofs of P ≠ NP, 2 proofs the problem is unprovable, and one proof that it is undecidable. Some attempts at resolving P versus NP have received brief media attention, though these attempts have since been refuted.
Logical characterizations
The P = NP problem can be restated in terms of the expressibility of certain classes of logical statements, as a result of work in descriptive complexity.
Consider all languages of finite structures with a fixed signature including a linear order relation. Then, all such languages in P can be expressed in first-order logic with the addition of a suitable least fixed-point combinator. Effectively, this, in combination with the order, allows the definition of recursive functions. As long as the signature contains at least one predicate or function in addition to the distinguished order relation, so that the amount of space taken to store such finite structures is actually polynomial in the number of elements in the structure, this precisely characterizes P.
Similarly, NP is the set of languages expressible in existential second-order logic—that is, second-order logic restricted to exclude universal quantification over relations, functions, and subsets. The languages in the polynomial hierarchy, PH, correspond to all of second-order logic. Thus, the question "is P a proper subset of NP" can be reformulated as "is existential second-order logic able to describe languages (of finite linearly ordered structures with nontrivial signature) that first-order logic with least fixed point cannot?". The word "existential" can even be dropped from the previous characterization, since P = NP if and only if P = PH (as the former would establish that NP = co-NP, which in turn implies that NP = PH).
Polynomial-time algorithms
No algorithm for any NP-complete problem is known to run in polynomial time. However, there are algorithms known for NP-complete problems with the property that if P = NP, then the algorithm runs in polynomial time on accepting instances (although with enormous constants, making the algorithm impractical). However, these algorithms do not qualify as polynomial time because their running time on rejecting instances is not polynomial. The following algorithm, due to Levin (without any citation), is such an example. It correctly accepts the NP-complete language SUBSET-SUM. It runs in polynomial time on inputs that are in SUBSET-SUM if and only if P = NP:
// Algorithm that accepts the NP-complete language SUBSET-SUM.
//
// this is a polynomial-time algorithm if and only if P = NP.
//
// "Polynomial-time" means it returns "yes" in polynomial time when
// the answer should be "yes", and runs forever when it is "no".
//
// Input: S = a finite set of integers
// Output: "yes" if any subset of S adds up to 0.
// Runs forever with no output otherwise.
// Note: "Program number M" is the program obtained by
// writing the integer M in binary, then
// considering that string of bits to be a
// program. Every possible program can be
// generated this way, though most do nothing
// because of syntax errors.
FOR K = 1...∞
  FOR M = 1...K
    Run program number M for K steps with input S
    IF the program outputs a list of distinct integers
      AND the integers are all in S
      AND the integers sum to 0
    THEN
      OUTPUT "yes" and HALT
If, and only if, P = NP, then this is a polynomial-time algorithm accepting an NP-complete language. "Accepting" means it gives "yes" answers in polynomial time, but is allowed to run forever when the answer is "no" (also known as a semi-algorithm).
This algorithm is enormously impractical, even if P = NP. If the shortest program that can solve SUBSET-SUM in polynomial time is b bits long, the above algorithm will try on the order of 2^b other programs first.
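The acceptance test inside the loop above (deciding whether an alleged output really certifies membership in SUBSET-SUM) is itself an ordinary polynomial-time check. A minimal sketch of just that test, with illustrative names and example values:

def is_valid_certificate(candidate, s):
    """Return True if 'candidate' is a non-empty list of distinct integers,
    all drawn from S, that sums to zero -- the acceptance test used by the
    universal-search pseudocode above."""
    return (
        len(candidate) > 0
        and len(candidate) == len(set(candidate))   # distinct integers
        and all(x in s for x in candidate)          # all in S
        and sum(candidate) == 0                     # sum to zero
    )

print(is_valid_certificate([-3, -2, 5], [-7, -3, -2, 5, 8]))  # True
print(is_valid_certificate([5, 8], [-7, -3, -2, 5, 8]))       # False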
Formal definitions
P and NP
Conceptually speaking, a decision problem is a problem that takes as input some string w over an alphabet Σ, and outputs "yes" or "no". If there is an algorithm (say a Turing machine, or a computer program with unbounded memory) that can produce the correct answer for any input string of length n in at most cn^k steps, where k and c are constants independent of the input string, then we say that the problem can be solved in polynomial time and we place it in the class P. Formally, P is defined as the set of all languages that can be decided by a deterministic polynomial-time Turing machine. That is,
P = { L : L = L(M) for some deterministic polynomial-time Turing machine M },
where L(M) = { w ∈ Σ* : M accepts w },
and a deterministic polynomial-time Turing machine is a deterministic Turing machine M that satisfies the following two conditions:
M halts on all inputs w and
there exists k ∈ N such that T_M(n) = O(n^k), where O refers to the big O notation, T_M(n) = max{ t_M(w) : w ∈ Σ*, |w| = n }, and t_M(w) denotes the number of steps M takes to halt on input w.
NP can be defined similarly using nondeterministic Turing machines (the traditional way). However, a modern approach to define NP is to use the concept of certificate and verifier. Formally, NP is defined as the set of languages over a finite alphabet that have a verifier that runs in polynomial time, where the notion of "verifier" is defined as follows.
Let L be a language over a finite alphabet, Σ.
L ∈ NP if, and only if, there exists a binary relation R ⊆ Σ* × Σ* and a positive integer k such that the following two conditions are satisfied:
For all x ∈ Σ*, x ∈ L if, and only if, there exists y ∈ Σ* such that (x, y) ∈ R and |y| = O(|x|^k); and
the language L_R = { x#y : (x, y) ∈ R } over Σ ∪ {#} is decidable by a deterministic Turing machine in polynomial time.
A Turing machine that decides L_R is called a verifier for L and a y such that (x, y) ∈ R is called a certificate of membership of x in L.
In general, a verifier does not have to be polynomial-time. However, for L to be in NP, there must be a verifier that runs in polynomial time.
Example
Let COMPOSITE = { x ∈ N : x = pq for some integers p, q > 1 }.
Clearly, the question of whether a given x is a composite is equivalent to the question of whether x is a member of COMPOSITE. It can be shown that COMPOSITE ∈ NP by verifying that it satisfies the above definition (if we identify natural numbers with their binary representations).
COMPOSITE also happens to be in P, a fact demonstrated by the invention of the AKS primality test.
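In the sense of the verifier definition above, a certificate of membership in COMPOSITE is simply a nontrivial divisor, and checking it takes a single modular reduction. A minimal sketch:

def verify_composite(x, factor):
    """Verifier for COMPOSITE: the certificate is a nontrivial divisor of x.
    The check is one modular reduction, polynomial in the bit length of x."""
    return 1 < factor < x and x % factor == 0

print(verify_composite(91, 7))   # True: 91 = 7 * 13, so 91 is composite
print(verify_composite(97, 5))   # False: 5 does not divide 97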
NP-completeness
There are many equivalent ways of describing NP-completeness.
Let L be a language over a finite alphabet Σ.
L is NP-complete if, and only if, the following two conditions are satisfied:
L ∈ NP; and
any L′ in NP is polynomial-time-reducible to L (written as L′ ≤p L), where L′ ≤p L if, and only if, the following two conditions are satisfied:
There exists f : Σ* → Σ* such that for all w in Σ* we have: (w ∈ L′ if, and only if, f(w) ∈ L); and
there exists a polynomial-time Turing machine that halts with f(w) on its tape on any input w.
Alternatively, if L ∈ NP, and there is another NP-complete problem that can be polynomial-time reduced to L, then L is NP-complete. This is a common way of proving some new problem is NP-complete.
Popular culture
The film Travelling Salesman, by director Timothy Lanzone, is the story of four mathematicians hired by the US government to solve the P versus NP problem.
In "Treehouse of Horror VI", the sixth episode of the seventh season of The Simpsons, the equation P=NP is seen shortly after Homer accidentally stumbles into the "third dimension".
In the second episode of season 2 of Elementary, "Solve for X" revolves around Sherlock and Watson investigating the murders of mathematicians who were attempting to solve P versus NP.
See also
Game complexity
List of unsolved problems in mathematics
Unique games conjecture
Unsolved problems in computer science
Notes
References
Sources
Further reading
Online drafts
External links
Aviad Rubinstein's Hardness of Approximation Between P and NP, winner of the ACM's 2017 Doctoral Dissertation Award.
1956 in computing
Computer-related introductions in 1956
Conjectures
Mathematical optimization
Millennium Prize Problems
Structural complexity theory
Unsolved problems in computer science
Unsolved problems in mathematics |
6295 | https://en.wikipedia.org/wiki/Chaos%20theory | Chaos theory | Chaos theory is an interdisciplinary scientific theory and branch of mathematics focused on underlying patterns and deterministic laws highly sensitive to initial conditions in dynamical systems that were thought to have completely random states of disorder and irregularities. Chaos theory states that within the apparent randomness of chaotic complex systems, there are underlying patterns, interconnectedness, constant feedback loops, repetition, self-similarity, fractals, and self-organization. The butterfly effect, an underlying principle of chaos, describes how a small change in one state of a deterministic nonlinear system can result in large differences in a later state (meaning that there is sensitive dependence on initial conditions). A metaphor for this behavior is that a butterfly flapping its wings in Brazil can cause a tornado in Texas.
Small differences in initial conditions, such as those due to errors in measurements or due to rounding errors in numerical computation, can yield widely diverging outcomes for such dynamical systems, rendering long-term prediction of their behavior impossible in general. This can happen even though these systems are deterministic, meaning that their future behavior follows a unique evolution and is fully determined by their initial conditions, with no random elements involved. In other words, the deterministic nature of these systems does not make them predictable. This behavior is known as deterministic chaos, or simply chaos. The theory was summarized by Edward Lorenz as:
Chaotic behavior exists in many natural systems, including fluid flow, heartbeat irregularities, weather and climate. It also occurs spontaneously in some systems with artificial components, such as the stock market and road traffic. This behavior can be studied through the analysis of a chaotic mathematical model, or through analytical techniques such as recurrence plots and Poincaré maps. Chaos theory has applications in a variety of disciplines, including meteorology, anthropology, sociology, environmental science, computer science, engineering, economics, ecology, and pandemic crisis management. The theory formed the basis for such fields of study as complex dynamical systems, edge of chaos theory, and self-assembly processes.
Introduction
Chaos theory concerns deterministic systems whose behavior can, in principle, be predicted. Chaotic systems are predictable for a while and then 'appear' to become random. The amount of time that the behavior of a chaotic system can be effectively predicted depends on three things: how much uncertainty can be tolerated in the forecast, how accurately its current state can be measured, and a time scale depending on the dynamics of the system, called the Lyapunov time. Some examples of Lyapunov times are: chaotic electrical circuits, about 1 millisecond; weather systems, a few days (unproven); the inner solar system, 4 to 5 million years. In chaotic systems, the uncertainty in a forecast increases exponentially with elapsed time. Hence, mathematically, doubling the forecast time more than squares the proportional uncertainty in the forecast. This means, in practice, a meaningful prediction cannot be made over an interval of more than two or three times the Lyapunov time. When meaningful predictions cannot be made, the system appears random.
Chaos theory is a method of qualitative and quantitative analysis to investigate the behavior of dynamic systems that cannot be explained and predicted by single data relationships, but must be explained and predicted by whole, continuous data relationships.
Chaotic dynamics
In common usage, "chaos" means "a state of disorder". However, in chaos theory, the term is defined more precisely. Although no universally accepted mathematical definition of chaos exists, a commonly used definition, originally formulated by Robert L. Devaney, says that to classify a dynamical system as chaotic, it must have these properties:
it must be sensitive to initial conditions,
it must be topologically transitive,
it must have dense periodic orbits.
In some cases, the last two properties above have been shown to actually imply sensitivity to initial conditions. In the discrete-time case, this is true for all continuous maps on metric spaces. In these cases, while it is often the most practically significant property, "sensitivity to initial conditions" need not be stated in the definition.
If attention is restricted to intervals, the second property implies the other two. An alternative and a generally weaker definition of chaos uses only the first two properties in the above list.
Chaos as a spontaneous breakdown of topological supersymmetry
In continuous time dynamical systems, chaos is the phenomenon of the spontaneous breakdown of topological supersymmetry, which is an intrinsic property of evolution operators of all stochastic and deterministic (partial) differential equations. This picture of dynamical chaos works not only for deterministic models, but also for models with external noise which is an important generalization from the physical point of view, since in reality, all dynamical systems experience influence from their stochastic environments. Within this picture, the long-range dynamical behavior associated with chaotic dynamics (e.g., the butterfly effect) is a consequence of Goldstone's theorem—in the application to the spontaneous topological supersymmetry breaking.
Sensitivity to initial conditions
Sensitivity to initial conditions means that each point in a chaotic system is arbitrarily closely approximated by other points that have significantly different future paths or trajectories. Thus, an arbitrarily small change or perturbation of the current trajectory may lead to significantly different future behavior.
Sensitivity to initial conditions is popularly known as the "butterfly effect", so-called because of the title of a paper given by Edward Lorenz in 1972 to the American Association for the Advancement of Science in Washington, D.C., entitled Predictability: Does the Flap of a Butterfly's Wings in Brazil set off a Tornado in Texas?. The flapping wing represents a small change in the initial condition of the system, which causes a chain of events that prevents the predictability of large-scale phenomena. Had the butterfly not flapped its wings, the trajectory of the overall system could have been vastly different.
A consequence of sensitivity to initial conditions is that if we start with a limited amount of information about the system (as is usually the case in practice), then beyond a certain time, the system would no longer be predictable. This is most prevalent in the case of weather, which is generally predictable only about a week ahead. This does not mean that one cannot assert anything about events far in the future—only that some restrictions on the system are present. For example, we know that the temperature of the surface of the earth will remain within its naturally occurring bounds (during the current geologic era), but we cannot predict exactly which day will have the hottest temperature of the year.
In more mathematical terms, the Lyapunov exponent measures the sensitivity to initial conditions, in the form of the rate of exponential divergence from the perturbed initial conditions. More specifically, given two starting trajectories in the phase space that are infinitesimally close, with initial separation δZ_0, the two trajectories end up diverging at a rate given by
|δZ(t)| ≈ e^(λt) |δZ_0|,
where t is the time and λ is the Lyapunov exponent. The rate of separation depends on the orientation of the initial separation vector, so a whole spectrum of Lyapunov exponents can exist. The number of Lyapunov exponents is equal to the number of dimensions of the phase space, though it is common to just refer to the largest one. For example, the maximal Lyapunov exponent (MLE) is most often used, because it determines the overall predictability of the system. A positive MLE is usually taken as an indication that the system is chaotic.
In addition to the above property, other properties related to sensitivity of initial conditions also exist. These include, for example, measure-theoretical mixing (as discussed in ergodic theory) and properties of a K-system.
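As a concrete illustration, the largest Lyapunov exponent of the logistic map x → 4x(1 − x) can be estimated by averaging log|f′(x)| along an orbit. The sketch below uses arbitrary iteration counts and starting point; the estimate should come out close to ln 2 ≈ 0.693, a positive value indicating chaos.

import math

def lyapunov_logistic(r=4.0, x0=0.2, n_transient=1000, n_iter=100_000):
    """Estimate the Lyapunov exponent of x -> r*x*(1-x) by averaging
    log|f'(x)| = log|r*(1 - 2x)| along a single orbit."""
    x = x0
    for _ in range(n_transient):      # discard transient behaviour
        x = r * x * (1 - x)
    total = 0.0
    for _ in range(n_iter):
        x = r * x * (1 - x)
        total += math.log(abs(r * (1 - 2 * x)))
    return total / n_iter

print(lyapunov_logistic())  # roughly 0.693 = ln 2 > 0, i.e. chaotic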
Non-periodicity
A chaotic system may have sequences of values for the evolving variable that exactly repeat themselves, giving periodic behavior starting from any point in that sequence. However, such periodic sequences are repelling rather than attracting, meaning that if the evolving variable is outside the sequence, however close, it will not enter the sequence and in fact, will diverge from it. Thus for almost all initial conditions, the variable evolves chaotically with non-periodic behavior.
Topological mixing
Topological mixing (or the weaker condition of topological transitivity) means that the system evolves over time so that any given region or open set of its phase space eventually overlaps with any other given region. This mathematical concept of "mixing" corresponds to the standard intuition, and the mixing of colored dyes or fluids is an example of a chaotic system.
Topological mixing is often omitted from popular accounts of chaos, which equate chaos with only sensitivity to initial conditions. However, sensitive dependence on initial conditions alone does not give chaos. For example, consider the simple dynamical system produced by repeatedly doubling an initial value. This system has sensitive dependence on initial conditions everywhere, since any pair of nearby points eventually becomes widely separated. However, this example has no topological mixing, and therefore has no chaos. Indeed, it has extremely simple behavior: all points except 0 tend to positive or negative infinity.
Topological transitivity
A map f : X → X is said to be topologically transitive if for any pair of non-empty open sets U, V ⊂ X, there exists k > 0 such that f^k(U) ∩ V ≠ ∅. Topological transitivity is a weaker version of topological mixing. Intuitively, if a map is topologically transitive then given a point x and a region V, there exists a point y near x whose orbit passes through V. This implies that it is impossible to decompose the system into two open sets.
An important related theorem is the Birkhoff Transitivity Theorem. It is easy to see that the existence of a dense orbit implies topological transitivity. The Birkhoff Transitivity Theorem states that if X is a second countable, complete metric space, then topological transitivity implies the existence of a dense set of points in X that have dense orbits.
Density of periodic orbits
For a chaotic system to have dense periodic orbits means that every point in the space is approached arbitrarily closely by periodic orbits. The one-dimensional logistic map defined by x → 4 x (1 – x) is one of the simplest systems with density of periodic orbits. For example, (5 − √5)/8 → (5 + √5)/8 → (5 − √5)/8 (or approximately 0.3454915 → 0.9045085 → 0.3454915) is an (unstable) orbit of period 2, and similar orbits exist for periods 4, 8, 16, etc. (indeed, for all the periods specified by Sharkovskii's theorem).
Sharkovskii's theorem is the basis of the Li and Yorke (1975) proof that any continuous one-dimensional system that exhibits a regular cycle of period three will also display regular cycles of every other length, as well as completely chaotic orbits.
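The period-2 orbit quoted above can be checked directly with a few lines of arithmetic (a small numerical sketch):

import math

def f(x):
    return 4 * x * (1 - x)   # the logistic map at r = 4

a = (5 - math.sqrt(5)) / 8   # approximately 0.3454915
b = (5 + math.sqrt(5)) / 8   # approximately 0.9045085
print(f(a), b)               # f maps a to b ...
print(f(b), a)               # ... and b back to a: an orbit of period 2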
Strange attractors
Some dynamical systems, like the one-dimensional logistic map defined by x → 4 x (1 – x), are chaotic everywhere, but in many cases chaotic behavior is found only in a subset of phase space. The cases of most interest arise when the chaotic behavior takes place on an attractor, since then a large set of initial conditions leads to orbits that converge to this chaotic region.
An easy way to visualize a chaotic attractor is to start with a point in the basin of attraction of the attractor, and then simply plot its subsequent orbit. Because of the topological transitivity condition, this is likely to produce a picture of the entire final attractor, and indeed both orbits shown in the figure on the right give a picture of the general shape of the Lorenz attractor. This attractor results from a simple three-dimensional model of the Lorenz weather system. The Lorenz attractor is perhaps one of the best-known chaotic system diagrams, probably because it is not only one of the first, but it is also one of the most complex, and as such gives rise to a very interesting pattern that, with a little imagination, looks like the wings of a butterfly.
Unlike fixed-point attractors and limit cycles, the attractors that arise from chaotic systems, known as strange attractors, have great detail and complexity. Strange attractors occur in both continuous dynamical systems (such as the Lorenz system) and in some discrete systems (such as the Hénon map). Other discrete dynamical systems have a repelling structure called a Julia set, which forms at the boundary between basins of attraction of fixed points. Julia sets can be thought of as strange repellers. Both strange attractors and Julia sets typically have a fractal structure, and the fractal dimension can be calculated for them.
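As a rough illustration of the last point, the following Python sketch (illustrative only; the point count and the two box sizes are arbitrary choices) generates points on the Hénon attractor and forms a crude two-scale estimate of its box-counting dimension:

import math

def henon(x, y, a=1.4, b=0.3):
    # Henon map, a standard discrete system with a strange attractor.
    return 1.0 - a * x * x + y, b * x

# Collect points on the attractor after discarding a transient.
x, y = 0.0, 0.0
points = []
for n in range(110000):
    x, y = henon(x, y)
    if n >= 10000:
        points.append((x, y))

def box_count(pts, eps):
    # Number of eps-sized boxes needed to cover the point set.
    return len({(math.floor(px / eps), math.floor(py / eps)) for px, py in pts})

e1, e2 = 0.01, 0.005
n1, n2 = box_count(points, e1), box_count(points, e2)
# Crude two-scale estimate of the box-counting dimension.
print("estimated dimension:", math.log(n2 / n1) / math.log(e1 / e2))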
Minimum complexity of a chaotic system
Discrete chaotic systems, such as the logistic map, can exhibit strange attractors whatever their dimensionality. The universality of one-dimensional maps with parabolic maxima and the Feigenbaum constants δ ≈ 4.669201 and α ≈ 2.502907 is well visible in a map proposed as a toy model for discrete laser dynamics:

x → G x (1 − tanh(x)),

where x stands for the electric field amplitude and G is the laser gain acting as the bifurcation parameter. The gradual increase of G changes the dynamics from regular to chaotic, with qualitatively the same bifurcation diagram as that of the logistic map.
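A rough numerical sketch of this route to chaos, assuming the map form given above, iterates the map for a few gain values and counts the distinct long-run values visited (Python, illustrative only):

import math

def laser_map(x, G):
    # Toy model for discrete laser dynamics: x -> G * x * (1 - tanh(x))
    return G * x * (1 - math.tanh(x))

# Scan the gain G and record the long-run values visited by the orbit;
# plotting the pairs (G, x) gives a bifurcation diagram qualitatively
# similar to that of the logistic map.
for i in range(6):
    G = 3.0 + 1.5 * i            # a few sample gain values
    x = 0.1
    for _ in range(1000):        # discard the transient
        x = laser_map(x, G)
    seen = {round(x, 6)}
    for _ in range(200):
        x = laser_map(x, G)
        seen.add(round(x, 6))
    print(f"G = {G:4.1f}: {len(seen):3d} distinct long-run values")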
In contrast, for continuous dynamical systems, the Poincaré–Bendixson theorem shows that a strange attractor can only arise in three or more dimensions. Finite-dimensional linear systems are never chaotic; for a dynamical system to display chaotic behavior, it must be either nonlinear or infinite-dimensional.
The Poincaré–Bendixson theorem states that a two-dimensional differential equation has very regular behavior. The Lorenz attractor discussed above is generated by a system of three differential equations such as:

dx/dt = σ(y − x),
dy/dt = x(ρ − z) − y,
dz/dt = xy − βz,

where x, y, and z make up the system state, t is time, and σ, ρ, and β are the system parameters. Five of the terms on the right hand side are linear, while two are quadratic; a total of seven terms. Another well-known chaotic attractor is generated by the Rössler equations, which have only one nonlinear term out of seven. Sprott found a three-dimensional system with just five terms that had only one nonlinear term, which exhibits chaos for certain parameter values. Zhang and Heidel showed that, at least for dissipative and conservative quadratic systems, three-dimensional quadratic systems with only three or four terms on the right-hand side cannot exhibit chaotic behavior. The reason is, simply put, that solutions to such systems are asymptotic to a two-dimensional surface and therefore solutions are well behaved.
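For concreteness, the Lorenz system can be integrated numerically; the Python sketch below is illustrative and uses the classic parameter values σ = 10, ρ = 28, β = 8/3 with a hand-written fourth-order Runge–Kutta step:

def lorenz(state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    # Right-hand side of the Lorenz equations.
    x, y, z = state
    return (sigma * (y - x), x * (rho - z) - y, x * y - beta * z)

def rk4_step(state, dt):
    # One classical fourth-order Runge-Kutta step.
    k1 = lorenz(state)
    k2 = lorenz(tuple(s + 0.5 * dt * k for s, k in zip(state, k1)))
    k3 = lorenz(tuple(s + 0.5 * dt * k for s, k in zip(state, k2)))
    k4 = lorenz(tuple(s + dt * k for s, k in zip(state, k3)))
    return tuple(s + dt / 6.0 * (a + 2 * b + 2 * c + d)
                 for s, a, b, c, d in zip(state, k1, k2, k3, k4))

state = (1.0, 1.0, 1.0)
dt = 0.01
trajectory = [state]
for _ in range(5000):                 # 50 time units on the attractor
    state = rk4_step(state, dt)
    trajectory.append(state)
print(trajectory[-1])                 # plotting (x, z) over the run shows the "butterfly"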
While the Poincaré–Bendixson theorem shows that a continuous dynamical system on the Euclidean plane cannot be chaotic, two-dimensional continuous systems with non-Euclidean geometry can exhibit chaotic behavior. Perhaps surprisingly, chaos may occur also in linear systems, provided they are infinite dimensional. A theory of linear chaos is being developed in a branch of mathematical analysis known as functional analysis.
Infinite dimensional maps
The straightforward generalization of coupled discrete maps is based upon a convolution integral which mediates interaction between spatially distributed maps:

ψ_{n+1}(r, t) = ∫ K(r − r′, t) f[ψ_n(r′, t)] dr′,

where the kernel K(r − r′, t) is a propagator derived as the Green function of a relevant physical system, and the local map f[ψ_n(r, t)] might be logistic-map-like or a complex map. For examples of complex maps the Julia set or the Ikeda map may serve. When wave propagation problems at a distance L with wavelength λ are considered, the kernel K may have the form of the Green function for the Schrödinger equation:

K(r − r′, L) = (exp(ikL) / (iλL)) exp(iπ|r − r′|² / (λL)).
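A discrete one-dimensional sketch of this construction, assuming a simple normalised Gaussian kernel in place of a physically derived Green function and a logistic local map, can be written as follows (Python with NumPy, illustrative only):

import numpy as np

N = 256                                   # lattice sites
x = np.linspace(-1.0, 1.0, N)
psi = 0.5 + 0.01 * np.random.rand(N)      # spatially distributed initial field

# Normalised Gaussian kernel standing in for the propagator K(r - r').
kernel = np.exp(-(x / 0.05) ** 2)
kernel /= kernel.sum()

def local_map(p, r=3.9):
    # Logistic-map-like local nonlinearity f[psi].
    return r * p * (1.0 - p)

for _ in range(200):
    # psi_{n+1}(r) = sum over r' of K(r - r') f[psi_n(r')]  (discrete convolution)
    psi = np.convolve(local_map(psi), kernel, mode="same")

print(psi.min(), psi.max())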
Jerk systems
In physics, jerk is the third derivative of position with respect to time. As such, differential equations of the form

J(d³x/dt³, d²x/dt², dx/dt, x) = 0

are sometimes called jerk equations. It has been shown that a jerk equation, which is equivalent to a system of three first-order, ordinary, non-linear differential equations, is in a certain sense the minimal setting for solutions showing chaotic behaviour. This motivates mathematical interest in jerk systems. Systems involving a fourth or higher derivative are accordingly called hyperjerk systems.
A jerk system's behavior is described by a jerk equation, and for certain jerk equations, simple electronic circuits can model solutions. These circuits are known as jerk circuits.
One of the most interesting properties of jerk circuits is the possibility of chaotic behavior. In fact, certain well-known chaotic systems, such as the Lorenz attractor and the Rössler attractor, are conventionally described as a system of three first-order differential equations that can combine into a single (although rather complicated) jerk equation. Another example of a jerk equation with nonlinearity in the magnitude of x is:

d³x/dt³ + A d²x/dt² + dx/dt − |x| + 1 = 0.
Here, A is an adjustable parameter. This equation has a chaotic solution for A=3/5 and can be implemented with the following jerk circuit; the required nonlinearity is brought about by the two diodes:
In the above circuit, all resistors are of equal value except R_A = R/A = 5R/3, and all capacitors are of equal size. The dominant frequency is 1/(2πRC). The output of op amp 0 will correspond to the x variable, the output of 1 corresponds to the first derivative of x, and the output of 2 corresponds to the second derivative.
Similar circuits only require one diode or no diodes at all.
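Numerically, the jerk equation above can be rewritten as three first-order equations and integrated directly; the Python sketch below is illustrative (the initial state and step size are arbitrary choices, and whether the orbit settles onto the chaotic attractor depends on them):

A = 0.6     # adjustable parameter; a chaotic solution is reported for A = 3/5

def rhs(x, v, a):
    # State form of x''' + A x'' + x' - |x| + 1 = 0,
    # with v = dx/dt and a = d2x/dt2, so x''' = -A a - v + |x| - 1.
    return v, a, -A * a - v + abs(x) - 1.0

x, v, a = 0.0, 0.0, 0.0      # illustrative initial state
dt = 0.001
for step in range(1, 200001):
    dx, dv, da = rhs(x, v, a)
    x, v, a = x + dt * dx, v + dt * dv, a + dt * da
    if abs(x) > 1e6:          # crude guard in case this initial state escapes
        print("orbit escaped to infinity")
        break
    if step % 40000 == 0:
        print(f"t = {step * dt:5.0f}  x = {x:+.4f}")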
See also the well-known Chua's circuit, one basis for chaotic true random number generators. The ease of construction of the circuit has made it a ubiquitous real-world example of a chaotic system.
Spontaneous order
Under the right conditions, chaos spontaneously evolves into a lockstep pattern. In the Kuramoto model, four conditions suffice to produce synchronization in a chaotic system.
Examples include the coupled oscillation of Christiaan Huygens' pendulums, fireflies, neurons, the London Millennium Bridge resonance, and large arrays of Josephson junctions.
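A minimal numerical sketch of the Kuramoto model (illustrative only; the oscillator count, coupling strength and frequency distribution are arbitrary choices) integrates the mean-field equations and prints the order parameter r, which measures the degree of synchronization:

import cmath
import math
import random

random.seed(1)
N, K, dt = 100, 4.0, 0.01                            # oscillators, coupling, time step
theta = [random.uniform(0, 2 * math.pi) for _ in range(N)]
omega = [random.gauss(0.0, 1.0) for _ in range(N)]   # natural frequencies

for step in range(2001):
    # Order parameter: r = 0 for scattered phases, r = 1 for full phase locking.
    r_complex = sum(cmath.exp(1j * p) for p in theta) / N
    r, psi = abs(r_complex), cmath.phase(r_complex)
    # Mean-field form: d(theta_i)/dt = omega_i + K * r * sin(psi - theta_i)
    theta = [t + dt * (w + K * r * math.sin(psi - t)) for t, w in zip(theta, omega)]
    if step % 500 == 0:
        print(f"t = {step * dt:4.1f}  r = {r:.3f}")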
History
An early proponent of chaos theory was Henri Poincaré. In the 1880s, while studying the three-body problem, he found that there can be orbits that are nonperiodic, and yet not forever increasing nor approaching a fixed point. In 1898, Jacques Hadamard published an influential study of the chaotic motion of a free particle gliding frictionlessly on a surface of constant negative curvature, called "Hadamard's billiards". Hadamard was able to show that all trajectories are unstable, in that all particle trajectories diverge exponentially from one another, with a positive Lyapunov exponent.
Chaos theory began in the field of ergodic theory. Later studies, also on the topic of nonlinear differential equations, were carried out by George David Birkhoff, Andrey Nikolaevich Kolmogorov, Mary Lucy Cartwright and John Edensor Littlewood, and Stephen Smale. Except for Smale, these studies were all directly inspired by physics: the three-body problem in the case of Birkhoff, turbulence and astronomical problems in the case of Kolmogorov, and radio engineering in the case of Cartwright and Littlewood. Although chaotic planetary motion had not been observed, experimentalists had encountered turbulence in fluid motion and nonperiodic oscillation in radio circuits without the benefit of a theory to explain what they were seeing.
Despite initial insights in the first half of the twentieth century, chaos theory became formalized as such only after mid-century, when it first became evident to some scientists that linear theory, the prevailing system theory at that time, simply could not explain the observed behavior of certain experiments like that of the logistic map. What had been attributed to measure imprecision and simple "noise" was considered by chaos theorists as a full component of the studied systems.
The main catalyst for the development of chaos theory was the electronic computer. Much of the mathematics of chaos theory involves the repeated iteration of simple mathematical formulas, which would be impractical to do by hand. Electronic computers made these repeated calculations practical, while figures and images made it possible to visualize these systems. As a graduate student in Chihiro Hayashi's laboratory at Kyoto University, Yoshisuke Ueda was experimenting with analog computers and noticed, on November 27, 1961, what he called "randomly transitional phenomena". Yet his advisor did not agree with his conclusions at the time, and did not allow him to report his findings until 1970.
Edward Lorenz was an early pioneer of the theory. His interest in chaos came about accidentally through his work on weather prediction in 1961. Lorenz was using a simple digital computer, a Royal McBee LGP-30, to run his weather simulation. He wanted to see a sequence of data again, and to save time he started the simulation in the middle of its course. He did this by entering a printout of the data that corresponded to conditions in the middle of the original simulation. To his surprise, the weather the machine began to predict was completely different from the previous calculation. Lorenz tracked this down to the computer printout. The computer worked with 6-digit precision, but the printout rounded variables off to a 3-digit number, so a value like 0.506127 printed as 0.506. This difference is tiny, and the consensus at the time would have been that it should have no practical effect. However, Lorenz discovered that small changes in initial conditions produced large changes in long-term outcome. Lorenz's discovery, which gave its name to Lorenz attractors, showed that even detailed atmospheric modelling cannot, in general, make precise long-term weather predictions.
In 1963, Benoit Mandelbrot found recurring patterns at every scale in data on cotton prices. Beforehand he had studied information theory and concluded noise was patterned like a Cantor set: on any scale the proportion of noise-containing periods to error-free periods was a constant – thus errors were inevitable and must be planned for by incorporating redundancy. Mandelbrot described both the "Noah effect" (in which sudden discontinuous changes can occur) and the "Joseph effect" (in which persistence of a value can occur for a while, yet suddenly change afterwards). This challenged the idea that changes in price were normally distributed. In 1967, he published "How long is the coast of Britain? Statistical self-similarity and fractional dimension", showing that a coastline's length varies with the scale of the measuring instrument, resembles itself at all scales, and is infinite in length for an infinitesimally small measuring device. Arguing that a ball of twine appears as a point when viewed from far away (0-dimensional), a ball when viewed from fairly near (3-dimensional), or a curved strand (1-dimensional), he argued that the dimensions of an object are relative to the observer and may be fractional. An object whose irregularity is constant over different scales ("self-similarity") is a fractal (examples include the Menger sponge, the Sierpiński gasket, and the Koch curve or snowflake, which is infinitely long yet encloses a finite space and has a fractal dimension of circa 1.2619). In 1982, Mandelbrot published The Fractal Geometry of Nature, which became a classic of chaos theory. Biological systems such as the branching of the circulatory and bronchial systems proved to fit a fractal model.
In December 1977, the New York Academy of Sciences organized the first symposium on chaos, attended by David Ruelle, Robert May, James A. Yorke (coiner of the term "chaos" as used in mathematics), Robert Shaw, and the meteorologist Edward Lorenz. The following year Pierre Coullet and Charles Tresser published "Itérations d'endomorphismes et groupe de renormalisation", and Mitchell Feigenbaum's article "Quantitative Universality for a Class of Nonlinear Transformations" finally appeared in a journal, after 3 years of referee rejections. Thus Feigenbaum (1975) and Coullet & Tresser (1978) discovered the universality in chaos, permitting the application of chaos theory to many different phenomena.
In 1979, Albert J. Libchaber, during a symposium organized in Aspen by Pierre Hohenberg, presented his experimental observation of the bifurcation cascade that leads to chaos and turbulence in Rayleigh–Bénard convection systems. He was awarded the Wolf Prize in Physics in 1986 along with Mitchell J. Feigenbaum for their inspiring achievements.
In 1986, the New York Academy of Sciences co-organized with the National Institute of Mental Health and the Office of Naval Research the first important conference on chaos in biology and medicine. There, Bernardo Huberman presented a mathematical model of the eye tracking disorder among schizophrenics. This led to a renewal of physiology in the 1980s through the application of chaos theory, for example, in the study of pathological cardiac cycles.
In 1987, Per Bak, Chao Tang and Kurt Wiesenfeld published a paper in Physical Review Letters describing for the first time self-organized criticality (SOC), considered one of the mechanisms by which complexity arises in nature.
Alongside largely lab-based approaches such as the Bak–Tang–Wiesenfeld sandpile, many other investigations have focused on large-scale natural or social systems that are known (or suspected) to display scale-invariant behavior. Although these approaches were not always welcomed (at least initially) by specialists in the subjects examined, SOC has nevertheless become established as a strong candidate for explaining a number of natural phenomena, including earthquakes, (which, long before SOC was discovered, were known as a source of scale-invariant behavior such as the Gutenberg–Richter law describing the statistical distribution of earthquake sizes, and the Omori law describing the frequency of aftershocks), solar flares, fluctuations in economic systems such as financial markets (references to SOC are common in econophysics), landscape formation, forest fires, landslides, epidemics, and biological evolution (where SOC has been invoked, for example, as the dynamical mechanism behind the theory of "punctuated equilibria" put forward by Niles Eldredge and Stephen Jay Gould). Given the implications of a scale-free distribution of event sizes, some researchers have suggested that another phenomenon that should be considered an example of SOC is the occurrence of wars. These investigations of SOC have included both attempts at modelling (either developing new models or adapting existing ones to the specifics of a given natural system), and extensive data analysis to determine the existence and/or characteristics of natural scaling laws.
In the same year, James Gleick published Chaos: Making a New Science, which became a best-seller and introduced the general principles of chaos theory as well as its history to the broad public. Initially the domain of a few, isolated individuals, chaos theory progressively emerged as a transdisciplinary and institutional discipline, mainly under the name of nonlinear systems analysis. Alluding to Thomas Kuhn's concept of a paradigm shift exposed in The Structure of Scientific Revolutions (1962), many "chaologists" (as some described themselves) claimed that this new theory was an example of such a shift, a thesis upheld by Gleick.
The availability of cheaper, more powerful computers broadens the applicability of chaos theory. Currently, chaos theory remains an active area of research, involving many different disciplines such as mathematics, topology, physics, social systems, population modeling, biology, meteorology, astrophysics, information theory, computational neuroscience, pandemic crisis management, etc.
Applications
Although chaos theory was born from observing weather patterns, it has become applicable to a variety of other situations. Some areas benefiting from chaos theory today are geology, mathematics, biology, computer science, economics, engineering, finance, meteorology, philosophy, anthropology, physics, politics, population dynamics, psychology, and robotics. A few categories are listed below with examples, but this is by no means a comprehensive list as new applications are appearing.
Cryptography
Chaos theory has been used for many years in cryptography. In the past few decades, chaos and nonlinear dynamics have been used in the design of hundreds of cryptographic primitives, including image encryption algorithms, hash functions, secure pseudo-random number generators, stream ciphers, watermarking and steganography. The majority of these algorithms are based on uni-modal chaotic maps, and a large portion of them use the control parameters and the initial condition of the chaotic maps as their keys. From a wider perspective, the similarities between chaotic maps and cryptographic systems are the main motivation for the design of chaos-based cryptographic algorithms. One type of encryption, secret key or symmetric key, relies on diffusion and confusion, which is modeled well by chaos theory. Another type of computing, DNA computing, when paired with chaos theory, offers a way to encrypt images and other information. Many of the DNA–chaos cryptographic algorithms, however, have been shown to be insecure, or the technique applied has been suggested to be inefficient.
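As a purely illustrative, and cryptographically insecure, toy example of this idea (not any published algorithm), the following Python sketch uses the control parameter and initial condition of a logistic map as a "key" to generate a keystream that is XORed with the message:

def keystream(r, x, n):
    # Iterate the logistic map and quantise each value to one byte.
    # The pair (r, x) plays the role of the secret key.
    out = []
    for _ in range(n):
        x = r * x * (1.0 - x)
        out.append(int(x * 256) % 256)
    return bytes(out)

def toy_encrypt(message, r, x):
    ks = keystream(r, x, len(message))
    return bytes(m ^ k for m, k in zip(message, ks))

key = (3.99, 0.123456789)                  # (control parameter, initial condition)
plaintext = b"chaos-based toy cipher"
ciphertext = toy_encrypt(plaintext, *key)
recovered = toy_encrypt(ciphertext, *key)  # XOR is its own inverse
print(ciphertext.hex())
print(recovered)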
Robotics
Robotics is another area that has recently benefited from chaos theory. Instead of robots acting in a trial-and-error type of refinement to interact with their environment, chaos theory has been used to build a predictive model.
Chaotic dynamics have been exhibited by passive walking biped robots.
Biology
For over a hundred years, biologists have been keeping track of populations of different species with population models. Most models are continuous, but recently scientists have been able to implement chaotic models in certain populations. For example, a study on models of Canadian lynx showed there was chaotic behavior in the population growth. Chaos can also be found in ecological systems, such as hydrology. While a chaotic model for hydrology has its shortcomings, there is still much to learn from looking at the data through the lens of chaos theory. Another biological application is found in cardiotocography. Fetal surveillance is a delicate balance of obtaining accurate information while being as noninvasive as possible. Better models of warning signs of fetal hypoxia can be obtained through chaotic modeling.
Economics
It is possible that economic models can also be improved through an application of chaos theory, but predicting the health of an economic system and what factors influence it most is an extremely complex task. Economic and financial systems are fundamentally different from those in the classical natural sciences since the former are inherently stochastic in nature, as they result from the interactions of people, and thus pure deterministic models are unlikely to provide accurate representations of the data. The empirical literature that tests for chaos in economics and finance presents very mixed results, in part due to confusion between specific tests for chaos and more general tests for non-linear relationships.
Chaos can be detected in economics by means of recurrence quantification analysis. In fact, Orlando et al., by means of the so-called recurrence quantification correlation index, were able to detect hidden changes in time series. The same technique was then employed to detect transitions from laminar (regular) to turbulent (chaotic) phases, as well as differences between macroeconomic variables, and to highlight hidden features of economic dynamics. Finally, chaos could help in modeling how economies operate as well as in embedding shocks due to external events such as COVID-19. For an updated account of the tools and the results obtained by empirically calibrating and testing deterministic chaotic models (e.g., Kaldor–Kalecki, Goodwin, Harrod), see Orlando et al.
Other areas
In chemistry, predicting gas solubility is essential to manufacturing polymers, but models using particle swarm optimization (PSO) tend to converge to the wrong points. An improved version of PSO has been created by introducing chaos, which keeps the simulations from getting stuck. In celestial mechanics, especially when observing asteroids, applying chaos theory leads to better predictions about when these objects will approach Earth and other planets. Four of the five moons of Pluto rotate chaotically. In quantum physics and electrical engineering, the study of large arrays of Josephson junctions benefitted greatly from chaos theory (Steven Strogatz, Sync: The Emerging Science of Spontaneous Order, Hyperion, 2003). Closer to home, coal mines have always been dangerous places where frequent natural gas leaks cause many deaths. Until recently, there was no reliable way to predict when they would occur. But these gas leaks have chaotic tendencies that, when properly modeled, can be predicted fairly accurately.
Chaos theory can be applied outside of the natural sciences, but historically nearly all such studies have suffered from lack of reproducibility; poor external validity; and/or inattention to cross-validation, resulting in poor predictive accuracy (if out-of-sample prediction has even been attempted). Glass and Mandell and Selz have found that no EEG study has as yet indicated the presence of strange attractors or other signs of chaotic behavior.
Researchers have continued to apply chaos theory to psychology. For example, in modeling group behavior in which heterogeneous members may behave as if sharing to different degrees what in Wilfred Bion's theory is a basic assumption, researchers have found that the group dynamic is the result of the individual dynamics of the members: each individual reproduces the group dynamics in a different scale, and the chaotic behavior of the group is reflected in each member.
Redington and Reidbord (1992) attempted to demonstrate that the human heart could display chaotic traits. They monitored the changes in between-heartbeat intervals for a single psychotherapy patient as she moved through periods of varying emotional intensity during a therapy session. Results were admittedly inconclusive. Not only were there ambiguities in the various plots the authors produced to purportedly show evidence of chaotic dynamics (spectral analysis, phase trajectory, and autocorrelation plots), but also when they attempted to compute a Lyapunov exponent as more definitive confirmation of chaotic behavior, the authors found they could not reliably do so.
In their 1995 paper, Metcalf and Allen maintained that they uncovered in animal behavior a pattern of period doubling leading to chaos. The authors examined a well-known response called schedule-induced polydipsia, by which an animal deprived of food for certain lengths of time will drink unusual amounts of water when the food is at last presented. The control parameter (r) operating here was the length of the interval between feedings, once resumed. The authors were careful to test a large number of animals and to include many replications, and they designed their experiment so as to rule out the likelihood that changes in response patterns were caused by different starting places for r.
Time series and first delay plots provide the best support for the claims made, showing a fairly clear march from periodicity to irregularity as the feeding times were increased. The various phase trajectory plots and spectral analyses, on the other hand, do not match up well enough with the other graphs or with the overall theory to lead inexorably to a chaotic diagnosis. For example, the phase trajectories do not show a definite progression towards greater and greater complexity (and away from periodicity); the process seems quite muddied. Also, where Metcalf and Allen saw periods of two and six in their spectral plots, there is room for alternative interpretations. All of this ambiguity necessitates some serpentine, post-hoc explanation to show that the results fit a chaotic model.
By adapting a model of career counseling to include a chaotic interpretation of the relationship between employees and the job market, Amundson and Bright found that better suggestions can be made to people struggling with career decisions. Modern organizations are increasingly seen as open complex adaptive systems with fundamental natural nonlinear structures, subject to internal and external forces that may contribute chaos. For instance, team building and group development is increasingly being researched as an inherently unpredictable system, as the uncertainty of different individuals meeting for the first time makes the trajectory of the team unknowable.
Some say that the chaos metaphor used in verbal theories, grounded in mathematical models and psychological aspects of human behavior, provides helpful insights for describing the complexity of small work groups that go beyond the metaphor itself.
Traffic forecasting may benefit from applications of chaos theory. Better predictions of when traffic will occur would allow measures to be taken to disperse it before it would have occurred. Combining chaos theory principles with a few other methods has led to a more accurate short-term prediction model (see the plot of the BML traffic model at right).
Chaos theory has been applied to environmental water cycle data (also hydrological data), such as rainfall and streamflow. These studies have yielded controversial results, because the methods for detecting a chaotic signature are often relatively subjective. Early studies tended to "succeed" in finding chaos, whereas subsequent studies and meta-analyses called those studies into question and provided explanations for why these datasets are not likely to have low-dimension chaotic dynamics.
See also
Examples of chaotic systems
Advected contours
Arnold's cat map
Bifurcation theory
Bouncing ball dynamics
Chua's circuit
Cliodynamics
Coupled map lattice
Double pendulum
Duffing equation
Dynamical billiards
Economic bubble
Gaspard-Rice system
Hénon map
Horseshoe map
List of chaotic maps
Rössler attractor
Standard map
Swinging Atwood's machine
Tilt A Whirl
Other related topics
Amplitude death
Anosov diffeomorphism
Catastrophe theory
Causality
Chaos machine
Chaotic mixing
Chaotic scattering
Control of chaos
Determinism
Edge of chaos
Emergence
Mandelbrot set
Kolmogorov–Arnold–Moser theorem
Ill-conditioning
Ill-posedness
Nonlinear system
Patterns in nature
Predictability
Quantum chaos
Santa Fe Institute
Synchronization of chaos
Unintended consequence
People
Ralph Abraham
Michael Berry
Leon O. Chua
Ivar Ekeland
Doyne Farmer
Martin Gutzwiller
Brosl Hasslacher
Michel Hénon
Aleksandr Lyapunov
Norman Packard
Otto Rössler
David Ruelle
Oleksandr Mikolaiovich Sharkovsky
Robert Shaw
Floris Takens
James A. Yorke
George M. Zaslavsky
References
Further reading
Articles
Textbooks
Semitechnical and popular works
Christophe Letellier, Chaos in Nature, World Scientific Publishing Company, 2012, .
John Briggs and David Peat, Turbulent Mirror: An Illustrated Guide to Chaos Theory and the Science of Wholeness, Harper Perennial 1990, 224 pp.
John Briggs and David Peat, Seven Life Lessons of Chaos: Spiritual Wisdom from the Science of Change, Harper Perennial 2000, 224 pp.
Predrag Cvitanović, Universality in Chaos, Adam Hilger 1989, 648 pp.
Leon Glass and Michael C. Mackey, From Clocks to Chaos: The Rhythms of Life, Princeton University Press 1988, 272 pp.
James Gleick, Chaos: Making a New Science, New York: Penguin, 1988. 368 pp.
L Douglas Kiel, Euel W Elliott (ed.), Chaos Theory in the Social Sciences: Foundations and Applications, University of Michigan Press, 1997, 360 pp.
Arvind Kumar, Chaos, Fractals and Self-Organisation; New Perspectives on Complexity in Nature, National Book Trust, 2003.
Hans Lauwerier, Fractals, Princeton University Press, 1991.
Edward Lorenz, The Essence of Chaos, University of Washington Press, 1996.
David Peak and Michael Frame, Chaos Under Control: The Art and Science of Complexity, Freeman, 1994.
Heinz-Otto Peitgen and Dietmar Saupe (Eds.), The Science of Fractal Images, Springer 1988, 312 pp.
Clifford A. Pickover, Computers, Pattern, Chaos, and Beauty: Graphics from an Unseen World, St Martins Pr 1991.
Clifford A. Pickover, Chaos in Wonderland: Visual Adventures in a Fractal World, St Martins Pr 1994.
Ilya Prigogine and Isabelle Stengers, Order Out of Chaos, Bantam 1984.
David Ruelle, Chance and Chaos, Princeton University Press 1993.
Ivars Peterson, Newton's Clock: Chaos in the Solar System, Freeman, 1993.
Manfred Schroeder, Fractals, Chaos, and Power Laws, Freeman, 1991.
Ian Stewart, Does God Play Dice?: The Mathematics of Chaos, Blackwell Publishers, 1990.
Steven Strogatz, Sync: The emerging science of spontaneous order, Hyperion, 2003.
Yoshisuke Ueda, The Road To Chaos, Aerial Pr, 1993.
M. Mitchell Waldrop, Complexity : The Emerging Science at the Edge of Order and Chaos, Simon & Schuster, 1992.
Antonio Sawaya, Financial Time Series Analysis : Chaos and Neurodynamics Approach, Lambert, 2012.
External links
Nonlinear Dynamics Research Group with Animations in Flash
The Chaos group at the University of Maryland
The Chaos Hypertextbook. An introductory primer on chaos and fractals
ChaosBook.org An advanced graduate textbook on chaos (no fractals)
Society for Chaos Theory in Psychology & Life Sciences
Nonlinear Dynamics Research Group at CSDC, Florence Italy
Interactive live chaotic pendulum experiment, allows users to interact and sample data from a real working damped driven chaotic pendulum
Nonlinear dynamics: how science comprehends chaos, talk presented by Sunny Auyang, 1998.
Nonlinear Dynamics. Models of bifurcation and chaos by Elmer G. Wiens
Gleick's Chaos (excerpt)
Systems Analysis, Modelling and Prediction Group at the University of Oxford
A page about the Mackey-Glass equation
High Anxieties — The Mathematics of Chaos (2008) BBC documentary directed by David Malone
The chaos theory of evolution – article published in New Scientist featuring similarities between evolution and non-linear systems, including the fractal nature of life and chaos.
Jos Leys, Étienne Ghys et Aurélien Alvarez, Chaos, A Mathematical Adventure. Nine films about dynamical systems, the butterfly effect and chaos theory, intended for a wide audience.
"Chaos Theory", BBC Radio 4 discussion with Susan Greenfield, David Papineau & Neil Johnson (In Our Time'', May 16, 2002)
Chaos: The Science of the Butterfly Effect (2019) an explanation presented by Derek Muller
Complex systems theory
Computational fields of study |
6321 | https://en.wikipedia.org/wiki/Channel%204 | Channel 4 | Channel 4 is a British free-to-air public-service television network. Its headquarters are in London, with a national headquarters
in Leeds and creative hubs in Glasgow and Bristol.
The channel was established to provide a fourth television service to the United Kingdom in addition to the licence-funded BBC One and BBC Two, and the single commercial broadcasting network ITV.
It began its transmission on 2 November 1982, the day after Welsh language broadcaster S4C's launch. It is publicly-owned and advertising-funded; originally a subsidiary of the Independent Broadcasting Authority (IBA), the station is now owned and operated by Channel Four Television Corporation, a public corporation of the Department for Digital, Culture, Media and Sport, which was established in 1990 and came into operation in 1993. In 2010, Channel 4 extended service into Wales and became a UK-wide television channel.
History
Conception
Before Channel 4 and S4C, Britain had three terrestrial television services: BBC1, BBC2, and ITV. The Broadcasting Act 1980 began the process of adding a fourth; Channel 4 was formally created, along with its Welsh counterpart, by an act of Parliament in 1982. After some months of test broadcasts, it began scheduled transmissions on 2 November 1982 from Scala House, the former site of the Scala Theatre.
The notion of a second commercial broadcaster in the United Kingdom had been around since the inception of ITV in 1954 and its subsequent launch in 1955; the idea of an "ITV2" was long expected and pushed for. Indeed, television sets sold throughout the 1970s and early 1980s had a spare tuning button labelled "ITV/IBA 2". Throughout ITV's history and until Channel 4 finally became a reality, a perennial dialogue existed between the GPO, the government, the ITV companies and other interested parties, concerning the form such an expansion of commercial broadcasting would take. Most likely, politics had the biggest impact in leading to a delay of almost three decades before the second commercial channel became a reality.
One clear benefit of the "late arrival" of the channel was that its frequency allocations at each transmitter had already been arranged in the early 1960s, when the launch of an ITV2 was highly anticipated. This led to very good coverage across most of the country and few problems of interference with other UK-based transmissions; a stark contrast to the problems associated with Channel 5's launch almost 15 years later. "ITV2" is not to be confused with ITV's digital television channel launched in 1998.
Wales
At the time the fourth service was being considered, a movement in Wales lobbied for the creation of a dedicated service that would air Welsh language programmes, then only catered for at "off peak" times on BBC Wales and HTV. The campaign was taken so seriously by Gwynfor Evans, former president of Plaid Cymru, that he threatened the government with a hunger strike were it not to honour the plans.
The result was that Channel 4 as seen by the rest of the United Kingdom would be replaced in Wales by Sianel Pedwar Cymru (S4C) ("Channel Four Wales"). Operated by a specially created authority, S4C would air programmes in Welsh made by HTV, the BBC and independent companies. Initially limited frequency space meant that Channel 4 could not be broadcast alongside S4C, though some Channel 4 programmes would be aired at less popular times on the Welsh variant; this practice continued until the closure of S4C's analogue transmissions in 2010, at which time S4C became a fully Welsh channel.
With this conversion of the Wenvoe transmitter group in Wales to digital terrestrial broadcasting on 31 March 2010, Channel 4 became a UK-wide television channel for the first time.
Since then, carriage on digital cable, satellite and digital terrestrial has introduced Channel 4 to Welsh homes where it is now universally available.
Launch and IBA control
The first voice heard on Channel 4's opening day of 2 November 1982 was that of continuity announcer Paul Coia who said: "Good afternoon. It's a pleasure to be able to say to you, welcome to Channel Four." Following the announcement, the channel headed into a montage of clips from its programmes set to the station's signature tune, "Fourscore", written by David Dundas, which would form the basis of the station's jingles for its first decade. The first programme to air on the channel was the teatime game show Countdown, produced by Yorkshire Television, at 16:45. The first person to be seen on Channel 4 was Richard Whiteley, with Ted Moult being the second. The first woman on the channel, contrary to popular belief, was not Whiteley's Countdown co-host Carol Vorderman, but a lexicographer only ever identified as Mary. Whiteley opened the show with the words: "As the countdown to a brand new channel ends, a brand new countdown begins."
On its first day, Channel 4 also broadcast soap opera Brookside, which often ran storylines thought to be controversial; this ran until 2003.
At its launch, Channel 4 committed itself to providing an alternative to the existing channels, an agenda in part set out by its remit which required the provision of programming to minority groups.
In step with its remit, the channel became well received both by minority groups and the arts and cultural worlds during this period, especially under founding chief executive Jeremy Isaacs, where the channel gained a reputation for programmes on the contemporary arts. Channel 4 co-commissioned Robert Ashley's ground-breaking television opera Perfect Lives, which it premiered over several episodes in 1984. The channel often did not receive mass audiences for much of this period, however, as might be expected for a station focusing on minority interest.
During this time Channel 4 also began the funding of independent films, such as the Merchant Ivory docudrama The Courtesans of Bombay.
In 1992, Channel 4 faced its first libel case by Jani Allan, a South African journalist, who objected to her representation in Nick Broomfield's documentary The Leader, His Driver and the Driver's Wife.
In September 1993, the channel broadcast the direct-to-TV documentary film Beyond Citizen Kane, in which it displayed the dominant position of the Rede Globo television network, and discussed its influence, power and political connections in Brazil.
Channel Four Television Corporation
After control of the station passed from the Channel Four Television Company to the Channel Four Television Corporation in 1993, a shift in broadcasting style took place. Instead of aiming for the fringes of society, it began to focus on the edges of the mainstream, and the centre of the mass market itself. It began to show many US programmes in peak viewing time, far more than it had previously done. It gave such shows as Friends and ER their UK premières.
In the early 2000s, Channel 4 began broadcasting reality formats such as Big Brother and obtained the rights to broadcast mass appeal sporting events like cricket and horse racing. This new direction increased ratings and revenues.
In addition, the corporation launched a number of new television channels through its new 4Ventures offshoot, including Film4, At the Races, E4 and More4.
Partially in reaction to its new "populist" direction, the Communications Act 2003 directed the channel to demonstrate innovation, experimentation and creativity, appeal to the tastes and interests of a culturally diverse society, and to include programmes of an educational nature which exhibit a distinctive character.
On 31 December 2004, Channel 4 launched a new look and new idents in which the logo is disguised as different objects and the 4 can be seen at an angle.
Under the leadership of Freeview founder Andy Duncan, 2005 saw a change of direction for Channel 4's digital channels. Channel 4 made E4 free-to-air on digital terrestrial television, and launched a new free-to-air digital channel called More4. By October, Channel 4 had joined the Freeview consortium. By July 2006, Film4 had likewise become free-to-air and restarted broadcasting on digital terrestrial.
Venturing into radio broadcasting, 2005 saw Channel 4 purchase 51 per cent of shares in the now defunct Oneword radio station, with UBC Media holding on to the remaining shares. New programmes such as the weekly, half-hour The Morning Report news programme were among some of the new content Channel 4 provided for the station, with the name 4Radio being used. As of early 2009, however, Channel 4's future involvement in radio remained uncertain.
On 2 November 2007, the station celebrated its 25th birthday. It showed the first episode of Countdown, an anniversary Countdown special, as well as a special edition of The Big Fat Quiz and using the original multicoloured 1982–1996 blocks logo on presentation and idents using the Fourscore jingle throughout the day.
In November 2009, Channel 4 launched a week of 3D television, broadcasting selected programmes each night using stereoscopic ColorCode 3D technology. The accompanying 3D glasses were distributed through Sainsbury's supermarkets.
On 29 September 2015, Channel 4 revamped its presentation for a fifth time; the new branding downplayed the "4" logo from most on-air usage, in favour of using the shapes from the logo in various forms. Four new idents were filmed by Jonathan Glazer, which featured the shapes in various real-world scenes depicting the "discovery" and "origins" of the shapes. The full logo was still occasionally used, but primarily for off-air marketing. Channel 4 also commissioned two new corporate typefaces, "Chadwick", and "Horseferry" (a variation of Chadwick with the aforementioned shapes incorporated into its letter forms), for use across promotional material and on-air.
On 31 October 2017, Channel 4 introduced a new series of idents continuing the theme, this time depicting the logo shapes as having formed an anthropomorphic "giant" character.
Recent history
Before the digital switch-over, Channel 4 raised concerns over how it might finance its public service obligations afterward. In April 2006 it was announced that Channel 4's digital switch-over costs would be paid for by licence fee revenues.
On 28 March 2007, Channel 4 announced plans to launch a music channel "4Music" as a joint venture with British media company EMAP, which would include carriage on the Freeview platform. On 15 August 2008, 4Music was launched across the UK. Channel 4 announced interest in launching a high-definition version of Film4 on Freeview, to coincide with the launch of Channel 4 HD. However, the fourth HD slot was given to Channel 5 instead. Channel 4 has since acquired a 50 per cent stake in EMAP's TV business for a reported £28 million.
Channel 4 was considered for privatisation by the governments of Margaret Thatcher, John Major and Tony Blair. More recently, the future of the channel was again being looked into by the government, with analysts suggesting several options for the channel's future. As of June 2021, the government of Boris Johnson was considering selling the channel.
In June 2017, it was announced that Alex Mahon would be the next chief executive, and would take over from David Abraham, who left in November 2017.
On 25 September 2021, Channel 4 and a number of its sub-channels went off air after an incident at Red Bee Media's playout centre in west London. Channel 4, More4, Film4, E4, 4Music, The Box, Box Hits, Kiss, Magic and Kerrang! were impacted, with the incident still affecting a number of the channels on 30 September 2021. The London Fire Brigade confirmed that a gas fire prevention system at the site had been activated, but firefighters found no sign of fire. Activation of the fire suppression system caused catastrophic damage to some systems, most notably Channel 4's subtitles, signing and audio description system. An emergency back-up subtitling system also failed, leaving Channel 4 unable to provide access services to viewers. This situation was criticised by the National Deaf Children's Society, who complained to the broadcasting watchdog. A new subtitling, signing and audio description system had to be built from scratch. The service eventually started returning at the end of October. In January 2022 Ofcom announced that it would be investigating Channel 4 for failing to meet its subtitling quota on the Freesat satellite service.
On 23 December 2021, Jon Snow presented Channel 4 News for the last time, after 32 years as a main presenter on the programme, making Snow one of the UK's longest-serving presenters on a national news programme.
Public service remit
Channel 4 was established with, and continues to hold, a remit of public service obligations which it must fulfil. The remit changes periodically, as dictated by various broadcasting and communications acts, and is regulated by the various authorities Channel 4 has been answerable to; originally the IBA, then the ITC and now Ofcom.
The preamble of the remit as per the Communications Act 2003 states that:
The remit also involves an obligation to provide programming for schools, and a substantial amount of programming produced outside of Greater London.
Carriage
Channel 4 was carried from its beginning on analogue terrestrial, which was practically the only means of television broadcast in the United Kingdom at the time. It continued to be broadcast through these means until the changeover to digital terrestrial television in the United Kingdom was complete. Since 1998, it has been universally available on digital terrestrial and the Sky platform (initially encrypted, though encryption was dropped on 14 April 2008, and the channel is now free of charge and also available on the Freesat platform), as well as having been available at various times in various areas on analogue and digital cable networks.
Due to its special status as a public service broadcaster with a specific remit, it is afforded free carriage on the terrestrial platforms, in contrast with other broadcasters such as ITV.
Channel 4 is available outside the United Kingdom; it is widely available in Ireland, the Netherlands, Belgium and Switzerland. The channel is registered to broadcast within the European Union/EEA through the Luxembourg Broadcasting Regulator (ALIA).
Since 2019, it has been offered by British Forces Broadcasting Service (BFBS) to members of the British Armed Forces and their families around the world, BFBS Extra having previously carried a selection of Channel 4 programmes.
The Channel 4 website allows Internet users in the United Kingdom to watch Channel 4 live on the Internet. In the past some programmes (mostly international imports) were not shown. Channel 4 is also provided by Virgin Mobile's DAB mobile TV service, which has the same restrictions as the Internet live stream had. Channel 4 is also carried by the Internet TV service TVCatchup and was previously carried by Zattoo until the operator removed the channel from its platform.
Channel 4 also makes some of its programming available "on demand" via cable and the Internet through All 4.
Funding
During the station's formative years, funding came from the ITV companies in return for their right to sell advertisements in their region on the fourth channel.
Nowadays it pays for itself in much the same way as most privately run commercial stations, i.e. through the sale of on-air advertising, programme sponsorship, and the sale of any programme content and merchandising rights it owns, such as overseas sales and video sales. For example, its total revenues were £925 million with 91 per cent derived from sale of advertising. It also has the ability to subsidise the main network through any profits made on the corporation's other endeavours, which have in the past included subscription fees from stations such as E4 and Film4 (now no longer subscription services) and its "video-on-demand" sales. In practice, however, these other activities are loss-making, and are subsidised by the main network. According to Channel 4's last published accounts, for 2005, the extent of this cross-subsidy was some £30 million.
The change in funding came about under the Broadcasting Act 1990 when the new corporation was afforded the ability to fund itself. Originally this arrangement left a "safety net" guaranteed minimum income should the revenue fall too low, funded by large insurance payments made to the ITV companies. Such a subsidy was never required, however, and these premiums were phased out by the government in 1998. After the link with ITV was cut, the cross-promotion which had existed between ITV and Channel 4 also ended.
In 2007, owing to severe funding difficulties, the channel sought government help and was granted a payment of £14 million over a six-year period. The money was to have come from the television licence fee, and would have been the first time that money from the licence fee had been given to any broadcaster other than the BBC. However, the plan was scrapped by the Secretary of State for Culture, Media and Sport, Andy Burnham, ahead of "broader decisions about the future framework of public service broadcasting". The broadcasting regulator Ofcom released its review in January 2009 in which it suggested that Channel 4 would preferably be funded by "partnerships, joint ventures or mergers".
Programming
Channel 4 is a "publisher-broadcaster", meaning that it commissions or "buys" all of its programming from companies independent of itself. It was the first broadcaster in the United Kingdom to do so on any significant scale; such commissioning is a stipulation which is included in its licence to broadcast. This had the consequence of starting an industry of production companies that did not have to rely on owning an ITV licence to see their programmes air, though since Channel 4, external commissioning has become regular practice on the numerous stations that have launched since, as well as on the BBC and in ITV (where a quota of 25 per cent minimum of total output has been imposed since the Broadcasting Act 1990 came into force). Although it was the first British broadcaster to commission all of its programmes from third parties, Channel 4 was the last terrestrial broadcaster to outsource its transmission and playout operations (to Red Bee Media), after 25 years in-house.
The requirement to obtain all content externally is stipulated in its licence. Additionally, Channel 4 also began a trend of owning the copyright and distribution rights of the programmes it aired, in a manner that is similar to the major Hollywood studios' ownership of television programmes that they did not directly produce. Thus, although Channel 4 does not produce programmes, many are seen as belonging to it.
It was established with a specific intention of providing programming to groups of minority interests, not catered for by its competitors, which at the time were only the BBC and ITV.
Channel 4 also pioneered the concept of 'stranded programming', where seasons of programmes following a common theme would be aired and promoted together. Some would be very specific, and run for a fixed period of time; the 4 Mation season, for example, showed innovative animation. Other, less specific strands, were (and still are) run regularly, such as T4, a strand of programming aimed at teenagers, on weekend mornings (and weekdays during school/college holidays); Friday Night Comedy, a slot where the channel would pioneer its style of comedy commissions, 4Music (now a separate channel) and 4Later, an eclectic collection of offbeat programmes transmitted in the early hours of the morning.
In its earlier years, certain risqué art-house films (dubbed by many of Channel 4's critics as being pornographic) would be screened with a red triangle digital on-screen graphic in the upper right of the screen. Other films were broadcast under the Film on Four banner, before the FilmFour brand was launched in the late 1990s.
Most watched programmes
The following is a list of the 10 most watched shows on Channel 4 since launch, based on Live +28 data supplied by BARB, and archival data published by Channel 4.
Kids segment
Take 5 (Channel 4) (1992–1996)
Comedy
During the station's early days, the screenings of innovative short one-off comedy films produced by a rotating line-up of alternative comedians went under the title of The Comic Strip Presents. The Tube and Saturday Live/Friday Night Live also launched the careers of a number of comedians and writers. Channel 4 broadcast a number of popular American imports, including Roseanne, Friends, Sex and the City, South Park and Will & Grace. Other significant US acquisitions include The Simpsons, for which the station was reported to have paid £700,000 per episode for the terrestrial television rights.
In April 2010, Channel 4 became the first UK broadcaster to adapt the American comedy institution of roasting to British television, with A Comedy Roast.
In 2010, Channel 4 organised Channel 4's Comedy Gala, a comedy benefit show in aid of Great Ormond Street Children's Hospital. With over 25 comedians appearing, it was billed as "the biggest live stand up show in United Kingdom history". Filmed live on 30 March in front of 14,000 at The O2 Arena in London, it was broadcast on 5 April. The event has continued to 2016.
In 2021, Channel 4 decided to revive The British Comedy Awards as part of its Stand Up To Cancer programming. The ceremony, billed as The National Comedy Awards, was due to be held in the spring of 2021 but was delayed by the coronavirus pandemic until 15 December 2021, and was then cancelled a week before it was due to be held because of concerns over the Omicron variant. The National Comedy Awards was not the only live comedy event in the channel's Christmas schedule affected by these concerns, as Joe Lycett: Mummy's Big Christmas Do! was also postponed, with the 22 December show now due to air as a pilot for a new series called Mummy's House Party in spring 2022.
Factual and current affairs
Channel 4 has a strong reputation for history programmes and real-life documentaries. It has also courted controversy, for example by broadcasting live the first public autopsy in the UK for 170 years, carried out by Gunther von Hagens in 2002, or the 2003 one-off stunt Derren Brown Plays Russian Roulette Live.
Its news service, Channel 4 News, is supplied by ITN whilst its long-standing investigative documentary series, Dispatches, attracts perennial media attention.
FourDocs
FourDocs was an online documentary site provided by Channel 4. It allowed viewers to upload their own documentaries to the site for others to view. It focused on documentaries of between 3 and 5 minutes. The website also included an archive of classic documentaries, interviews with documentary filmmakers and short educational guides to documentary-making. It won a Peabody Award in 2006. The site also included a strand for documentaries of under 59 seconds, called "Microdocs".
Schools programming
Channel 4 is obliged to carry schools programming as part of its remit and licence.
ITV Schools on Channel 4
Since 1957 ITV had produced schools programming, which became an obligation. In 1987, five years after the station was launched, the IBA afforded ITV free carriage of these programmes during Channel 4's then-unused weekday morning hours. This arrangement allowed the ITV companies to fulfil their obligation to provide schools programming, whilst allowing ITV itself to broadcast regular programmes complete with advertisements. During the times in which schools programmes were aired Central Television provided most of the continuity with play-out originating from Birmingham.
Channel 4 Schools/4Learning
After the restructuring of the station in 1993, ITV's obligations to provide such programming on Channel 4's airtime passed to Channel 4 itself, and the new service became Channel 4 Schools, with the new corporation administering the service and commissioning its programmes, some still from ITV, others from independent producers.
In March 2008, the 4Learning interactive new media commission Slabovia.tv was launched. The Slabplayer online media player showing TV shows for teenagers was launched on 26 May 2008.
The schools programming has always had elements which differ from its normal presentational package. In 1993, the Channel 4 Schools idents featured famous people in one category, with light shining on them in front of an industrial-looking setting supplemented by instrumental calming music. This changed in 1996 with the circles look to numerous children touching the screen, forming circles of information then picked up by other children. The last child would produce the Channel 4 logo in the form of three vertical circles, with another in the middle and to the left containing the Channel 4 logo.
A persistent feature of the presentation was a countdown sequence: in 1993 a slide with the programme name, and afterwards an extended sequence matching the channel branding. In 1996, this was an extended ident with a timer in the top left corner, and in 1999, following the adoption of the squares look, it featured a square with a timer slowly making its way across the right of the screen while people learning and having fun passed across the screen. It finished with the Channel 4 logo box on the right of the screen and the name 'Channel 4 Schools' being shown. This was adapted in 2000 when the service's name was changed to '4Learning'.
In 2001, this was altered to various scenes from classrooms around the world and different parts of school life. The countdown now flips over from the top, right, bottom and left with each second, and ends with four coloured squares, three of which are aligned vertically to the left of the Channel 4 logo, which is contained inside the fourth box. The tag 'Learning' is located directly beneath the logo. The final countdown sequence lasted between 2004 and 2005 and featured a background video of current controversial issues, overlaid with upcoming programming information. The video features people in the style of graffiti enacting the overuse of CCTV cameras, fox hunting, computer viruses and pirate videos, relationships, pollution of the seas and violent lifestyles. Following 2005, no branded section has been used for schools programmes.
Religious programmes
From the outset, Channel 4 did not conform to the expectations of conventional religious broadcasting in the UK. John Ranelagh, its first Commissioning Editor for Religion, made his priority 'broadening the spectrum of religious programming' and more 'intellectual' concerns. He also ignored the religious programme advisory structure that had been put in place by the BBC and subsequently adopted by ITV. Ranelagh's first major commission, a three-part documentary series called Jesus: The Evidence, caused a furore. The programmes, transmitted during the Easter period of 1984, seemed to advocate the ideas that the Gospels were unreliable, that Jesus may have indulged in witchcraft, and that he may not have existed at all. The series triggered a public outcry and marked a significant moment in the deterioration of the relationship between the UK's broadcasting and religious institutions.
Film
Numerous genres of film-making – such as comedy, drama, documentary, adventure/action, romance and horror/thriller – are represented in the channel's schedule. From the launch of Channel 4 until 1998, film presentations on C4 would often be broadcast under the "Film on Four" banner.
In March 2005, Channel 4 screened the uncut Lars von Trier film The Idiots, which includes unsimulated sexual intercourse, making it the first UK terrestrial channel to do so. The channel had previously screened other films with similar material but censored and with warnings.
Since 1 November 1998, Channel 4 has had a digital subsidiary channel dedicated to the screening of films. This channel launched as a paid subscription channel under the name "FilmFour", and was relaunched in July 2006 as a free-to-air channel under the current name of "Film4". The Film4 channel carries a wide range of film productions, including acquired and Film4-produced projects. Channel 4's general entertainment channels E4 and More4 also screen feature films at certain points in the schedule as part of their content mix.
Wank Week
A season of television programmes about masturbation, called Wank Week, was to be broadcast in the United Kingdom by Channel 4 in March 2007. The first show was about a Masturbate-a-thon, a public mass masturbation event, organised to raise money for the sexual health charity Marie Stopes International. Another film would have focused on compulsive male masturbators and a third was to feature the sex educator Betty Dodson.
The series came under public attack from senior television figures, and was pulled amid claims of declining editorial standards and controversy over the channel's public service broadcasting credentials.
Global warming
On 8 March 2007, Channel 4 screened a highly controversial documentary, The Great Global Warming Swindle. The programme states that global warming is "a lie" and "the biggest scam of modern times". The programme's accuracy has been disputed on multiple points, and several commentators have criticised it for being one-sided, noting that the mainstream position on global warming is supported by the scientific academies of the major industrialised nations. There were 246 complaints to Ofcom as of 25 April 2007, including allegations that the programme falsified data. The programme has been criticised by scientists and scientific organisations, and various scientists who participated in the documentary claimed their views had been distorted.
Against Nature: An earlier controversial Channel 4 programme, also made by Martin Durkin, which was likewise critical of the environmental movement and was found by the UK's Independent Television Commission to have misrepresented and distorted the views of interviewees through selective editing.
The Greenhouse Conspiracy: An earlier Channel 4 documentary broadcast on 12 August 1990, as part of the Equinox series, in which similar claims were made. Three of the people interviewed (Lindzen, Michaels and Spencer) were also interviewed in The Great Global Warming Swindle.
Ahmadinejad's Christmas speech
In the Alternative Christmas address of 2008, a Channel 4 tradition since 1993 with a different presenter each year, Iranian President Mahmoud Ahmadinejad made a thinly veiled attack on the United States by claiming that Christ would have been against "bullying, ill-tempered and expansionist powers".
The airing courted controversy and was rebuked by several human rights activists, politicians and religious figures, including Peter Tatchell, Louise Ellman, Ron Prosor and Rabbi Aaron Goldstein. A spokeswoman for the Foreign and Commonwealth Office said: "President Ahmadinejad has, during his time in office, made a series of appalling anti-Semitic statements. The British media are rightly free to make their own editorial choices, but this invitation will cause offence and bemusement not just at home but among friendly countries abroad".
However, some defended Channel 4. Stonewall director Ben Summerskill stated: "In spite of his ridiculous and often offensive views, it is an important way of reminding him that there are some countries where free speech is not repressed...If it serves that purpose, then Channel 4 will have done a significant public service". Dorothy Byrne, Channel 4's head of news and current affairs, also defended the station, saying: "As the leader of one of the most powerful states in the Middle East, President Ahmadinejad's views are enormously influential... As we approach a critical time in international relations, we are offering our viewers an insight into an alternative world view...Channel 4 has devoted more airtime to examining Iran than any other broadcaster and this message continues a long tradition of offering a different perspective on the world around us".
4Talent
4Talent is an editorial branch of Channel 4's commissioning wing, which co-ordinates Channel 4's various talent development schemes for film, television, radio, new media and other platforms and provides a showcasing platform for new talent.
There are bases in London, Birmingham, Glasgow and Belfast, serving editorial hubs known respectively as 4Talent National, 4Talent Central England, 4Talent Scotland and 4Talent Northern Ireland. These four sites include features, profiles and interviews in text, audio and video formats, divided into five zones: TV, Film, Radio, New Media and Extras, which covers other arts such as theatre, music and design. 4Talent also collates networking, showcasing and professional development opportunities, and runs workshops, masterclasses, seminars and showcasing events across the UK.
4Talent Magazine
4Talent Magazine is the creative industries magazine from 4Talent, which launched in 2005 as TEN4 magazine under the editorship of Dan Jones. 4Talent Magazine is currently edited by Nick Carson. Other staff include deputy editor Catherine Bray and production editor Helen Byrne. The magazine covers rising and established figures of interest in the creative industries, a remit including film, radio, TV, comedy, music, new media and design.
Subjects are usually UK-based, with contributing editors based in Northern Ireland, Scotland, London and Birmingham, but the publication has been known to source international content from Australia, America, continental Europe and the Middle East. The magazine is frequently organised around a theme for the issue, for instance giving half of November 2007's pages over to profiling winners of the annual 4Talent Awards.
An unusual feature of the magazine's credits is the equal prominence given to the names of writers, photographers, designers and illustrators, contradicting standard industry practice of more prominent writer bylines. It is also recognisable for its 'wraparound' covers, which use the front and back as a continuous canvas – often produced by guest artists.
Although 4Talent Magazine is technically a newsstand title, a significant proportion of its readers are subscribers. It started life as a quarterly 100-page title, but has since doubled in size and is now published bi-annually.
Scheduling
In the 2010s, Channel 4 became the public service broadcaster most likely to amend its schedule at short notice if programmes were not getting enough viewers in their intended slots. Programmes heavily promoted by the channel before launch and then moved from their slot a week later include Sixteen: Class of 2021, a fly-on-the-wall school documentary which lost its prime 9pm slot after one episode on 31 August 2021, despite a four-star review in The Guardian; Channel 4 moved the next episode to a late-night (post-primetime) slot on a different day and broadcast the remainder of the four-part series in that timeslot. Also in 2021, the channel launched Epic Wales: Valleys, Mountains and Coast, a Wales-set version of its More4 documentaries The Pennines: Backbone of Britain, The Yorkshire Dales and The Lakes, and Devon and Cornwall. It was initially broadcast in a prime Friday-night slot at 8pm, in the hour before the channel's comedy shows, but was dropped before the series was completed and replaced by repeats. In February 2022, the channel scheduled a new version of the show, Wondrous Wales, in a Saturday-night slot at 8pm, but took it out of the schedule after one episode, moving a repeat of Matt Baker: Our Farm in the Dales up to 8pm and putting an episode of Escape to the Chateau in Baker's slot at 7pm. Other programmes moved out of primetime in 2022 include Mega Mansion Hunters, Channel 4's answer to Selling Sunset, which saw its third and final episode moved past midnight, with repeats put in the schedule before it.
In addition to these shows, O. T. Fagbenle's sitcom Maxxx was pulled from the youth TV channel E4 after one episode had been broadcast on 2 April 2020; Channel 4 kept the series off-air until Black History Month, and it went out on the main channel from October 2020.
Presentation
Since its launch in 1982, Channel 4 has used the same logo, which consists of a stylised numeral "4" made up of nine differently shaped blocks. The logo was designed by Martin Lambie-Nairn and his partner Colin Robinson, and Channel 4 was the first UK channel to use an ident made with advanced computer generation (the first electronically generated ident was on BBC2 in 1979, but it was two-dimensional). The logo was designed in conjunction with Bo Gehring Aviation of Los Angeles and originally depicted the "4" in red, yellow, green, blue and purple. The music accompanying the ident was called "Fourscore" and was composed by David Dundas; it was later released as a single alongside a B-side, "Fourscore Two", although neither reached the UK charts. In November 1992, "Fourscore" was replaced by new music.
In 1996, Channel 4 commissioned Tomato Films to revamp the "4", which resulted in the "Circles" idents showing four white circles forming up transparently over various scenes, with the "4" logo depicted in white in one of the circles.
In 1999, Spin redesigned the logo to feature in a single square which sat on the right-hand side of the screen, whilst various stripes would move along from left to right, often lighting up the squared "4". Like the earlier "Circles" idents, the stripes would be interspersed with various scenes potentially related to the upcoming programme.
The logo was made three-dimensional again in 2004 when it was depicted in filmed scenes that show the blocks forming the "4" logo for less than a second before the action moves away again.
In 2015, the logo was disassembled completely to allow the blocks to appear as parts of a nature scene, sometimes featuring a strange dancing creature and sometimes being excavated for scientific study, one being studied under a microscope and showing a tardigrade. The second wave of these idents, launched in 2017, depict a giant creature made of the "4" blocks (made to look almost like a person) interacting with everyday life, sometimes shouting the "Fourscore" theme as a foghorn.
On-air identity
The Lambie-Nairn logo was the first logo of Channel 4, used from its launch on 2 November 1982 until 1996, lasting for fourteen years. The logo was re-introduced for one day only on 22 January 2021, to promote Channel 4's new five-part drama, It's a Sin, which focused on the 1980s AIDS crisis. It was additionally used once on 28 December 2020 as a commemoration for Lambie-Nairn, who had died three days earlier.
Regions/International
Regions
Channel 4 has, since its inception, broadcast identical programmes and continuity throughout the United Kingdom (excluding Wales where it did not operate on analogue transmitters). At launch this made it unique, as both the BBC and ITV had long-established traditions of providing regional variations in their programming in different areas of the country. Since the launch of subsequent British television channels, Channel 4 has become typical in its lack of regional programming variations.
A few exceptions exist to this rule for programming and continuity:
Some of Channel 4's schools' programming (1980s/early 1990s) was regionalised due to differences in curricula between different regions.
Advertising on Channel 4 does contain regional variation: prior to 1993, when ITV was responsible for selling Channel 4's advertising, each regional ITV company would provide the content of advertising breaks, covering the same transmitter area as themselves, and these breaks were often unique to that area. After Channel 4 became responsible for its own advertising, it continued to offer advertisers the ability to target particular audiences and divided its coverage area into six regions: London, South, Midlands, North, Northern Ireland and Scotland. Wales does not have its own advertising region; instead, its viewers receive the southern region on digital platforms intentionally broadcast to the area or the neighbouring region where terrestrial transmissions spill over into Wales. Channel 5 and ITV Breakfast use a similar model to Channel 4 for providing their own advertising regions, despite also having a single national output of programming.
Part of Channel 4's remit covers the commissioning of programmes from outside London. Channel 4 has a dedicated director of nations and regions, Stuart Cosgrove, who is based in a regional office in Glasgow. As his job title suggests, it is his responsibility to foster relations with independent producers based in areas of the United Kingdom (including Wales) outside London.
International
Channel 4 is available in Ireland, with adverts specifically tailored towards the Irish market. The channel is registered with the broadcasting regulators in Luxembourg for terms of conduct and business within the EU/EEA, while observing guidelines outlined by Ireland's BAI code. Irish advertising sales are managed by Media Link in Dublin. Where Channel 4 does not hold broadcasting rights within the Republic of Ireland, such programming is unavailable; for example, the series Glee was not available on Channel 4 on Sky in Ireland because it was broadcast on Virgin Media One there. Programming available on All 4 is currently available within the Republic of Ireland without restrictions. Elsewhere in Europe, the UK version of the channel is available.
Future possibility of regional news
With ITV plc pushing for much looser requirements on the amount of regional news and other programming it is obliged to broadcast in its ITV regions, the idea of Channel 4 taking on a regional news commitment has been considered, with the corporation in talks with Ofcom and ITV over the matter. Channel 4 believe that a scaling-back of such operations on ITV's part would be detrimental to Channel 4's national news operation, which shares much of its resources with ITV through their shared news contractor ITN. At the same time, Channel 4 also believe that such an additional public service commitment would bode well in on-going negotiations with Ofcom in securing additional funding for its other public service commitments.
Channel 4 HD
In mid-2006, Channel 4 ran a six-month closed trial of HDTV as part of the wider Freeview HD experiment via the Crystal Palace transmitter to London and parts of the home counties. The trial included Lost and Desperate Housewives, as US broadcasters such as ABC already had an HDTV back catalogue.
On 10 December 2007, Channel 4 launched a high-definition television simulcast of Channel 4 on Sky's digital satellite platform, after Sky agreed to contribute toward the channel's satellite distribution costs. It was the first full-time high-definition channel from a terrestrial UK broadcaster.
On 31 July 2009, Virgin Media added Channel 4 HD on channel 146 (later on channel 142, now on channel 141) as part of the M pack. On 25 March 2010, Channel 4 HD appeared on Freeview channel 52 with a placeholding caption, ahead of a commercial launch on 30 March 2010, coinciding with the commercial launch of Freeview HD. On 19 April 2011, Channel 4 HD was added to Freesat on channel 126. As a consequence, the channel moved from being free-to-view to free-to-air on satellite during March 2011. With the closure of S4C Clirlun on Freeview in Wales on 1 December 2012, Channel 4 HD launched in Wales on 2 December 2012.
The channel carries the same schedule as Channel 4, broadcasting programmes in HD when available, acting as a simulcast. Therefore, SD programming is broadcast upscaled to HD. The first true HD programme to be shown was the 1996 Adam Sandler film Happy Gilmore. From launch until 2016 the presence of the 4HD logo on screen denoted true HD content.
On 1 July 2014, Channel 4 +1 HD, an HD simulcast of Channel 4 +1, launched on Freeview channel 110. On 22 June 2020, it was removed from Freeview along with 4seven HD, to help make room on COM7 following the closure of COM8.
On 20 February 2018, Channel 4 announced that Channel 4 HD and All 4 would no longer be supplied on Freesat from 22 February 2018. Channel 4 HD returned to the platform on 8 December 2021, along with the music channel portfolio of The Box Plus Network.
All 4
All 4 is a video on demand service from Channel 4, launched in November 2006 as 4oD. The service offers a variety of programmes recently shown on Channel 4, E4, More4 or from their archives, though some programmes and movies are not available due to rights issues.
Teletext services
4-Tel/FourText
Channel 4 originally licensed an ancillary teletext service to provide schedules, programme information and features. The original service was called 4-Tel, and was produced by Intelfax, a company set up especially for the purpose. It was carried in the 400s on Oracle. In 1993, with Oracle losing its franchise to Teletext Ltd, 4-Tel found a new home in the 300s, and had its name shown in the header row. Intelfax continued to produce the service and in 2002 it was renamed FourText.
Teletext on 4
In 2003, Channel 4 awarded Teletext Ltd a ten-year contract to run the channel's ancillary teletext service, named Teletext on 4. The service closed in 2008, and Teletext is no longer available on Channel 4, ITV and Channel 5.
Awards and nominations
See also
Annan Committee
Big 4
Channel 4 Banned season
Channel 4 Sheffield Pitch competition
List of Channel 4 television programmes
List of television stations in the United Kingdom
Renowned Films
3 Minute Wonder
List of computer scientists
This is a list of computer scientists, people who do work in computer science, in particular researchers and authors.
Some persons notable as programmers are included here because they work in research as well as program. A few of these people pre-date the invention of the digital computer; they are now regarded as computer scientists because their work can be seen as leading to the invention of the computer. Others are mathematicians whose work falls within what would now be called theoretical computer science, such as complexity theory and algorithmic information theory.
A
Wil van der Aalst – business process management, process mining, Petri nets
Scott Aaronson – quantum computing and complexity theory
Rediet Abebe – algorithms, artificial intelligence
Hal Abelson – intersection of computing and teaching
Serge Abiteboul – database theory
Samson Abramsky – game semantics
Leonard Adleman – RSA, DNA computing
Manindra Agrawal – polynomial-time primality testing
Luis von Ahn – human-based computation
Alfred Aho – compilers book, the 'a' in AWK
Frances E. Allen – compiler optimization
Gene Amdahl – supercomputer developer, Amdahl Corporation founder
David P. Anderson – volunteer computing
Lisa Anthony – natural user interfaces
Andrew Appel – compiler of text books
Cecilia R. Aragon – invented treap, human-centered data science
Bruce Arden – programming language compilers (GAT, Michigan Algorithm Decoder (MAD)), virtual memory architecture, Michigan Terminal System (MTS)
Sanjeev Arora – PCP theorem
Winifred "Tim" Alice Asprey – established the computer science curriculum at Vassar College
John Vincent Atanasoff – computer pioneer, creator of Atanasoff Berry Computer (ABC)
B
Charles Babbage (1791–1871) – designed the first mechanical computers, the Difference Engine and the Analytical Engine
Charles Bachman – American computer scientist, known for Integrated Data Store
Roland Carl Backhouse – mathematics of computer program construction, algorithmic problem solving, ALGOL
John Backus – FORTRAN, Backus–Naur form, first complete compiler
David F. Bacon – programming languages, garbage collection
David A. Bader
Victor Bahl
Anthony James Barr – SAS System
Jean Bartik (1924–2011) – one of the first computer programmers, on ENIAC (1946), one of the first Vacuum tube computers, back when "programming" involved using cables, dials, and switches to physically rewire the machine; worked with John Mauchly toward BINAC (1949), EDVAC (1949), UNIVAC (1951) to develop early "stored program" computers
Andrew Barto
Friedrich L. Bauer – Stack (data structure), Sequential Formula Translation, ALGOL, software engineering, Bauer–Fike theorem
Rudolf Bayer – B-tree
Gordon Bell (born 1934) – computer designer (DEC VAX), author of Computer Structures
Steven M. Bellovin – network security
Cecilia Berdichevsky (1925–2010) – pioneering Argentinian computer scientist
Tim Berners-Lee – World Wide Web
Daniel J. Bernstein – qmail, software as protected speech
Peter Bernus
Abhay Bhushan
Dines Bjørner – Vienna Development Method (VDM), RAISE
Gerrit Blaauw – one of the principal designers of the IBM System 360 line of computers
Sue Black
David Blei
Dorothy Blum – National Security Agency
Lenore Blum – complexity
Manuel Blum – cryptography
Barry Boehm – software engineering economics, spiral development
Corrado Böhm – author of the structured program theorem
Kurt Bollacker
Jeff Bonwick – invented slab allocation and ZFS
Grady Booch – Unified Modeling Language, Object Management Group
George Boole – Boolean logic
Andrew Booth – developed the first rotating drum storage device
Kathleen Booth – developed the first assembly language
Anita Borg (1949–2003) – American computer scientist, founder of Anita Borg Institute for Women and Technology
Bert Bos – Cascading Style Sheets
Mikhail Botvinnik – World Chess Champion, computer scientist and electrical engineer, pioneered early expert system AI and computer chess
Jonathan Bowen – Z notation, formal methods
Stephen R. Bourne – Bourne shell, portable ALGOL 68C compiler
Harry Bouwman (born 1953) – Dutch Information systems researcher, and Professor at the Åbo Akademi University
Robert S. Boyer – string searching, ACL2 theorem prover
Karlheinz Brandenburg – main MP3 contributor
Lawrence M. Breed – implementation of Iverson Notation (APL), co-developed APL\360, Scientific Time Sharing Corporation cofounder
Jack E. Bresenham – early computer-graphics contributions, including Bresenham's algorithm
Sergey Brin – co-founder of Google
David J. Brown – unified memory architecture, binary compatibility
Per Brinch Hansen (surname "Brinch Hansen") – RC 4000 multiprogramming system, operating system kernels, microkernels, monitors, concurrent programming, Concurrent Pascal, distributed computing & processes, parallel computing
Sjaak Brinkkemper – methodology of product software development
Fred Brooks – System 360, OS/360, The Mythical Man-Month, No Silver Bullet
Rod Brooks
Margaret Burnett – visual programming languages, end-user software engineering, and gender-inclusive software
Michael Butler – Event-B
C
Tracy Camp – wireless computing
Martin Campbell-Kelly – history of computing
Rosemary Candlin
Bryan Cantrill – invented DTrace
Luca Cardelli
John Carmack – codeveloped Doom
Edwin Catmull – computer graphics
Vinton Cerf – Internet, TCP/IP
Gregory Chaitin
Robert Cailliau – Belgian computer scientist
Zhou Chaochen – duration calculus
Peter Chen – entity-relationship model, data modeling, conceptual model
Leonardo Chiariglione – founder of MPEG
Tracy Chou – computer scientist and activist
Alonzo Church – mathematics of combinators, lambda calculus
Alberto Ciaramella – speech recognition, patent informatics
Edmund M. Clarke – model checking
John Cocke – RISC
Edgar F. Codd (1923–2003) – formulated the database relational model
Jacques Cohen – computer science professor
Ian Coldwater – computer security
Simon Colton – computational creativity
Alain Colmerauer – Prolog
Douglas Comer – Xinu
Paul Justin Compton – Ripple Down Rules
Gordon Cormack – co-invented dynamic Markov compression
Stephen Cook – NP-completeness
James Cooley – Fast Fourier transform (FFT)
Danese Cooper – open-source software
Fernando J. Corbató – Compatible Time-Sharing System (CTSS), Multics
Kit Cosper – open-source software
Patrick Cousot – abstract interpretation
Ingemar Cox – digital watermarking
Seymour Cray – Cray Research, supercomputer
Nello Cristianini – machine learning, pattern analysis, artificial intelligence
Jon Crowcroft – networking
W. Bruce Croft
Glen Culler – interactive computing, computer graphics, high performance computing
Haskell Curry
D
Luigi Dadda – designer of the Dadda multiplier
Ole-Johan Dahl – Simula, object-oriented programming
Ryan Dahl – founder of node.js project
Andries van Dam – computer graphics, hypertext
Samir Das – Wireless Networks, Mobile Computing, Vehicular ad hoc network, Sensor Networks, Mesh networking, Wireless ad hoc network
Neil Daswani – computer security, co-founder and co-director of Stanford Advanced Computer Security Program, co-founder of Dasient (acquired by Twitter), former chief information security officer of LifeLock and Symantec's Consumer Business Unit
Christopher J. Date – proponent of database relational model
Jeff Dean – Bigtable, MapReduce, Spanner of Google
Erik Demaine – computational origami
Tom DeMarco
Richard DeMillo – computer security, software engineering, educational technology
Dorothy E. Denning – computer security
Peter J. Denning – identified the use of an operating system's working set and balance set, President of ACM
Michael Dertouzos – Director of Massachusetts Institute of Technology (MIT) Laboratory for Computer Science (LCS) from 1974 to 2001
Alexander Dewdney
Robert Dewar – IFIP WG 2.1 member, ALGOL 68, chairperson; AdaCore cofounder, president, CEO
Vinod Dham – P5 Pentium processor
Jan Dietz (born 1945) – information systems theory and Design & Engineering Methodology for Organizations
Whitfield Diffie (born 1944) – public key cryptography, Diffie–Hellman key exchange
Edsger Dijkstra – algorithms, Dijkstra's algorithm, Go To Statement Considered Harmful, semaphore (programming), IFIP WG 2.1 member
Matthew Dillon – DragonFly BSD with LWKT, vkernel OS-level virtualisation, file systems: HAMMER1, HAMMER2
Alan Dix – wrote important university level textbook on human–computer interaction
Jack Dongarra – linear algebra high performance computing (HCI)
Marco Dorigo – ant colony optimization
Paul Dourish – human computer interaction
Charles Stark Draper (1901–1987) – designer of Apollo Guidance Computer, "father of inertial navigation", MIT professor
Susan Dumais – information retrieval
Adam Dunkels – Contiki, lwIP, uIP, protothreads
Jon Michael Dunn – founding dean of Indiana University School of Informatics, information based logics especially relevance logic
Schahram Dustdar – Distributed Systems, TU Wien, Austria
E
Peter Eades – graph drawing
Annie J. Easley
Wim Ebbinkhuijsen – COBOL
John Presper Eckert – ENIAC
Alan Edelman – Edelman's Law, stochastic operator, Interactive Supercomputing, Julia (programming language) cocreator, high performance computing, numerical computing
Brendan Eich – JavaScript, Mozilla
Philip Emeagwali – supercomputing
E. Allen Emerson – model checking
Douglas Engelbart – tiled windows, hypertext, computer mouse
Barbara Engelhardt – latent variable models, genomics, quantitative trait locus (QTL)
David Eppstein
Andrey Ershov – languages ALPHA, Rapira; first Soviet time-sharing system AIST-0, electronic publishing system RUBIN, multiprocessing workstation MRAMOR, IFIP WG 2.1 member, Aesthetics and the Human Factor in Programming
Don Estridge (1937–1985) – led development of original IBM Personal Computer (PC); known as "father of the IBM PC"
Oren Etzioni – MetaCrawler, Netbot
Christopher Riche Evans
David C. Evans – computer graphics
Shimon Even
F
Scott Fahlman
Edward Feigenbaum – intelligence
Edward Felten – computer security
Tim Finin
Raphael Finkel
Donald Firesmith
Gary William Flake
Tommy Flowers – Colossus computer
Robert Floyd – NP-completeness
Sally Floyd – Internet congestion control
Lawrence J. Fogel – evolutionary programming
James D. Foley
Ken Forbus
L. R. Ford, Jr.
Lance Fortnow
Martin Fowler
Robert France
Herbert W. Franke
Edward Fredkin
Yoav Freund
Daniel P. Friedman
Charlotte Froese Fischer – computational theoretical physics
Ping Fu
Xiaoming Fu
Kunihiko Fukushima – neocognitron, artificial neural networks, convolutional neural network architecture, unsupervised learning, deep learning
D. R. Fulkerson
G
Richard P. Gabriel – Maclisp, Common Lisp, Worse is Better, League for Programming Freedom, Lucid Inc., XEmacs
Zvi Galil
Bernard Galler – MAD (programming language)
Hector Garcia-Molina
Michael Garey – NP-completeness
Hugo de Garis
Bill Gates – cofounder of Microsoft
David Gelernter
Lisa Gelobter – was the Chief Digital Service Officer for the U.S. Department of Education, founder of teQuitable
Charles Geschke
Zoubin Ghahramani
Sanjay Ghemawat
Jeremy Gibbons – generic programming, functional programming, formal methods, computational biology, bioinformatics
Juan E. Gilbert – human-centered computing
Lee Giles – CiteSeer
Seymour Ginsburg – formal languages, automata theory, AFL theory, database theory
Robert L. Glass
Kurt Gödel – computability; not a computer scientist per se, but his work was invaluable in the field
Ashok Goel
Joseph Goguen
Hardik Gohel
E. Mark Gold – Language identification in the limit
Adele Goldberg – Smalltalk
Andrew V. Goldberg – algorithms, algorithm engineering
Ian Goldberg – cryptographer, off-the-record messaging
Oded Goldreich – cryptography, computational complexity theory
Shafi Goldwasser – cryptography, computational complexity theory
Gene Golub – Matrix computation
Martin Charles Golumbic – algorithmic graph theory
Gastón Gonnet – cofounder of Waterloo Maple Inc.
Ian Goodfellow – machine learning
James Gosling – Network extensible Window System (NeWS), Java
Paul Graham – Viaweb, On Lisp, Arc
Robert M. Graham – programming language compilers (GAT, Michigan Algorithm Decoder (MAD)), virtual memory architecture, Multics
Susan L. Graham – compilers, programming environments
Jim Gray – database
Sheila Greibach – Greibach normal form, Abstract family of languages (AFL) theory
Ralph Griswold – SNOBOL
Bill Gropp – Message Passing Interface, Portable, Extensible Toolkit for Scientific Computation (PETSc)
Tom Gruber – ontology engineering
Shelia Guberman – handwriting recognition
Ramanathan V. Guha – Resource Description Framework (RDF), Netscape, RSS, Epinions
Neil J. Gunther – computer performance analysis, capacity planning
Jürg Gutknecht – with Niklaus Wirth: Lilith computer; Modula-2, Oberon, Zonnon programming languages; Oberon operating system
Michael Guy – Phoenix, work on number theory, computer algebra, higher dimension polyhedra theory; with John Horton Conway
H
Nico Habermann – work on operating systems, software engineering, inter-process communication, process synchronization, deadlock avoidance, software verification, programming languages: ALGOL 60, BLISS, Pascal, Ada
Philipp Matthäus Hahn – mechanical calculator
Eldon C. Hall – Apollo Guidance Computer
Wendy Hall
Joseph Halpern
Margaret Hamilton – ultra-reliable software design
Richard Hamming – Hamming code, founder of the Association for Computing Machinery
Jiawei Han – data mining
Frank Harary – graph theory
Juris Hartmanis – computational complexity theory
Johan Håstad – computational complexity theory
Les Hatton – software failure and vulnerabilities
Igor Hawryszkiewycz (born 1948) – American computer scientist and organizational theorist
He Jifeng – provably correct systems
Eric Hehner – predicative programming, formal methods, quote notation, ALGOL
Martin Hellman – encryption
Gernot Heiser – operating system teaching, research, commercialising, Open Kernel Labs, OKL4, Wombat
James Hendler – Semantic Web
John L. Hennessy – computer architecture
Andrew Herbert
Carl Hewitt
Kelsey Hightower – open source, cloud computing
Danny Hillis – Connection Machine
Geoffrey Hinton
Julia Hirschberg
Tin Kam Ho – artificial intelligence, machine learning
C. A. R. Hoare – logic, rigor, communicating sequential processes (CSP)
Louis Hodes (1934–2008) – Lisp, pattern recognition, logic programming, cancer research
Betty Holberton – ENIAC programmer, developed the first Sort Merge Generator
John Henry Holland – genetic algorithms
Herman Hollerith (1860–1929) – invented recording of data on a machine readable medium, using punched cards
Gerard Holzmann – software verification, logic model checking (SPIN)
John Hopcroft – compilers
Admiral Grace Hopper (1906–1992) – developed early compilers: FLOW-Matic, COBOL; worked on UNIVAC; gave speeches on computer history, at which she handed out foot-long lengths of wire as "nanoseconds"
Eric Horvitz – artificial intelligence
Alston Householder
Paul Hudak (1952–2015) – Haskell language design
David A. Huffman (1925–1999) – Huffman coding, used in data compression
John Hughes – structuring computations with arrows; QuickCheck randomized program testing framework; Haskell language design
Roger Hui – co-created J language
Watts Humphrey (1927–2010) – Personal Software Process (PSP), Software quality, Team Software Process (TSP)
I
Jean Ichbiah – Ada
Roberto Ierusalimschy – Lua (programming language)
Dan Ingalls – Smalltalk, BitBlt, Lively Kernel
Mary Jane Irwin
Kenneth E. Iverson – APL, J
J
Ivar Jacobson – Unified Modeling Language, Object Management Group
Anil K. Jain (born 1948)
Ramesh Jain
Jonathan James
David S. Johnson
Stephen C. Johnson
Cliff Jones – Vienna Development Method (VDM)
Michael I. Jordan
Mathai Joseph
Aravind K. Joshi
Bill Joy (born 1954) – Sun Microsystems, BSD UNIX, vi, csh
Dan Jurafsky – natural language processing
K
William Kahan – numerical analysis
Robert E. Kahn – TCP/IP
Avinash Kak – digital image processing
Poul-Henning Kamp – invented GBDE, FreeBSD Jails, Varnish cache
David Karger
Richard Karp – NP-completeness
Narendra Karmarkar – Karmarkar's algorithm
Marek Karpinski – NP optimization problems
Ted Kaehler – Smalltalk, Squeak, HyperCard
Alan Kay – Dynabook, Smalltalk, overlapping windows
Neeraj Kayal – AKS primality test
Manolis Kellis – computational biology
John George Kemeny – BASIC
Ken Kennedy – compiling for parallel and vector machines
Brian Kernighan (born 1942) – Unix, the 'k' in AWK
Carl Kesselman – grid computing
Gregor Kiczales – CLOS, reflection, aspect-oriented programming
Peter T. Kirstein – Internet
Stephen Cole Kleene – Kleene closure, recursion theory
Dan Klein – Natural language processing, Machine translation
Leonard Kleinrock – ARPANET, queueing theory, packet switching, hierarchical routing
Donald Knuth – The Art of Computer Programming, MIX/MMIX, TeX, literate programming
Andrew Koenig – C++
Daphne Koller – Artificial intelligence, bayesian network
Michael Kölling – BlueJ
Andrey Nikolaevich Kolmogorov – algorithmic complexity theory
Janet L. Kolodner – case-based reasoning
David Korn – KornShell
Kees Koster – ALGOL 68
Robert Kowalski – logic programming
John Koza – genetic programming
John Krogstie – SEQUAL framework
Joseph Kruskal – Kruskal's algorithm
Thomas E. Kurtz (born 1928) – BASIC programming language; Dartmouth College computer professor
L
Richard E. Ladner
Monica S. Lam
Leslie Lamport – algorithms for distributed computing, LaTeX
Butler Lampson – SDS 940, founding member Xerox PARC, Xerox Alto, Turing Award
Peter Landin – ISWIM, J operator, SECD machine, off-side rule, syntactic sugar, ALGOL, IFIP WG 2.1 member, advanced lambda calculus to model programming languages (aided functional programming), denotational semantics
Tom Lane – Independent JPEG Group, PostgreSQL, Portable Network Graphics (PNG)
Börje Langefors
Chris Lattner – creator of Swift (programming language) and LLVM compiler infrastructure
Steve Lawrence
Edward D. Lazowska
Joshua Lederberg
Manny M Lehman
Charles E. Leiserson – cache-oblivious algorithms, provably good work-stealing, coauthor of Introduction to Algorithms
Douglas Lenat – artificial intelligence, Cyc
Yann LeCun
Rasmus Lerdorf – PHP
Max Levchin – Gausebeck–Levchin test and PayPal
Leonid Levin – computational complexity theory
Kevin Leyton-Brown – artificial intelligence
J.C.R. Licklider
David Liddle
Jochen Liedtke – microkernel operating systems Eumel, L3, L4
John Lions – Lions' Commentary on UNIX 6th Edition, with Source Code (Lions Book)
Charles H. Lindsey – IFIP WG 2.1 member, Revised Report on ALGOL 68
Richard J. Lipton – computational complexity theory
Barbara Liskov – programming languages
Yanhong Annie Liu – programming languages, algorithms, program design, program optimization, software systems, optimizing, analysis, and transformations, intelligent systems, distributed computing, computer security, IFIP WG 2.1 member
Darrell Long – computer data storage, computer security
Patricia D. Lopez – broadening participation in computing
Gillian Lovegrove
Ada Lovelace – first programmer
David Luckham – Lisp, Automated theorem proving, Stanford Pascal Verifier, Complex event processing, Rational Software cofounder (Ada compiler)
Eugene Luks
Nancy Lynch
M
Nadia Magnenat Thalmann – computer graphics, virtual actor
Tom Maibaum
Zohar Manna – program verification, temporal logic
James Martin – information engineering
Robert C. Martin (Uncle Bob) – software craftsmanship
John Mashey
Yuri Matiyasevich – solving Hilbert's tenth problem
Yukihiro Matsumoto – Ruby (programming language)
John Mauchly (1907–1980) – designed ENIAC, first general-purpose electronic digital computer, as well as EDVAC, BINAC and UNIVAC I, the first commercial computer; worked with Jean Bartik on ENIAC and Grace Murray Hopper on UNIVAC
Ujjwal Maulik (born 1965) – multi-objective clustering and bioinformatics
Derek McAuley – ubiquitous computing, computer architecture, networking
John McCarthy – Lisp (programming language), ALGOL, IFIP WG 2.1 member, artificial intelligence
Andrew McCallum
Douglas McIlroy – macros, pipes, Unix philosophy
Chris McKinstry – artificial intelligence, Mindpixel
Marshall Kirk McKusick – BSD, Berkeley Fast File System
Lambert Meertens – ALGOL 68, IFIP WG 2.1 member, ABC (programming language)
Kurt Mehlhorn – algorithms, data structures, LEDA
Bertrand Meyer – Eiffel (programming language)
Silvio Micali – cryptography
Robin Milner – ML (programming language)
Jack Minker – database logic
Marvin Minsky – artificial intelligence, perceptrons, Society of Mind
James G. Mitchell – WATFOR compiler, Mesa (programming language), Spring (operating system), ARM architecture
Tom M. Mitchell
Arvind Mithal – formal verification of large digital systems, developing dynamic dataflow architectures, parallel computing programming languages (Id, pH), compiling on parallel machines
Paul Mockapetris – Domain Name System (DNS)
Cleve Moler – numerical analysis, MATLAB
Faron Moller – concurrency theory
John P. Moon – inventor, Apple Inc.
Charles H. Moore – Forth language
Edward F. Moore – Moore machine
Gordon Moore – Moore's law
J Strother Moore – string searching, ACL2 theorem prover
Roger Moore – co-developed APL\360, created IPSANET, co-founded I. P. Sharp Associates
Hans Moravec – robotics
Carroll Morgan – formal methods
Robert Tappan Morris – Morris worm
Joel Moses – Macsyma
Rajeev Motwani – randomized algorithm
Oleg A. Mukhanov – quantum computing developer, co-founder and CTO of SeeQC
Stephen Muggleton – Inductive Logic Programming
Klaus-Robert Müller – machine learning, artificial intelligence
Alan Mycroft – programming languages
Musharaf M. M. Hussain – parallel computing and multicore superscalar processors
N
Mihai Nadin – anticipation research
Makoto Nagao – machine translation, natural language processing, digital library
Frieder Nake – pioneered computer arts
Bonnie Nardi – human–computer interaction
Peter Naur (1928–2016) – Backus–Naur form (BNF), ALGOL 60, IFIP WG 2.1 member
Roger Needham – computer security
James G. Nell – Generalised Enterprise Reference Architecture and Methodology (GERAM)
Greg Nelson (1953–2015) – satisfiability modulo theories, extended static checking, program verification, Modula-3 committee, Simplify theorem prover in ESC/Java
Bernard de Neumann – massively parallel autonomous cellular processor, software engineering research
Klara Dan von Neumann (1911–1963) – early computers, ENIAC programmer and control designer
John von Neumann (1903–1957) – early computers, von Neumann machine, set theory, functional analysis, mathematics pioneer, linear programming, quantum mechanics
Allen Newell – artificial intelligence, Computer Structures
Max Newman – Colossus computer, MADM
Andrew Ng – artificial intelligence, machine learning, robotics
Nils John Nilsson (1933–2019) – artificial intelligence
G.M. Nijssen – Nijssen's Information Analysis Methodology (NIAM) object-role modeling
Tobias Nipkow – proof assistance
Maurice Nivat – theoretical computer science, Theoretical Computer Science journal, ALGOL, IFIP WG 2.1 member
Phiwa Nkambule – Fintech, artificial intelligence, machine learning, robotics
Jerre Noe – computerized banking
Peter Nordin – artificial intelligence, genetic programming, evolutionary robotics
Donald Norman – user interfaces, usability
Peter Norvig – artificial intelligence, Director of Research at Google
George Novacky – University of Pittsburgh: assistant department chair, senior lecturer in computer science, assistant dean of CAS for undergraduate studies
Kristen Nygaard – Simula, object-oriented programming
O
Martin Odersky – Scala programming language
Peter O'Hearn – separation logic, bunched logic, Infer Static Analyzer
T. William Olle – Ferranti Mercury
Steve Omohundro
Severo Ornstein
John O'Sullivan – Wi-Fi
John Ousterhout – Tcl programming language
Mark Overmars – video game programming
P
Larry Page – co-founder of Google
Sankar Pal
Paritosh Pandya
Christos Papadimitriou
David Park (1935–1990) – first Lisp implementation, expert in fairness, program schemas, bisimulation in concurrent computing
David Parnas – information hiding, modular programming
DJ Patil – former Chief Data Scientist of the United States
Yale Patt – Instruction-level parallelism, speculative architectures
David A. Patterson – reduced instruction set computer (RISC), RISC-V, redundant arrays of inexpensive disks (RAID), Berkeley Network of Workstations (NOW)
Mike Paterson – algorithms, analysis of algorithms (complexity)
Mihai Pătraşcu – data structures
Lawrence Paulson – ML
Randy Pausch (1960–2008) – human–computer interaction, Carnegie professor, "Last Lecture"
Juan Pavón – software agents
Judea Pearl – artificial intelligence, search algorithms
David Pearson – CADES, computer graphics
Alan Perlis – Programming Pearls
Radia Perlman – spanning tree protocol
Pier Giorgio Perotto – computer designer at Olivetti, designer of the Programma 101 programmable calculator
Rózsa Péter – recursive function theory
Simon Peyton Jones – functional programming
Kathy Pham – data, artificial intelligence, civic technology, healthcare, ethics
Roberto Pieraccini – speech technologist, engineering director at Google
Gordon Plotkin
Amir Pnueli – temporal logic
Willem van der Poel – computer graphics, robotics, geographic information systems, imaging, multimedia, virtual environments, games
Cicely Popplewell (1920–1995) – British software engineer in 1960s
Emil Post – mathematics
Jon Postel – Internet
Franco Preparata – computer engineering, computational geometry, parallel algorithms, computational biology
William H. Press – numerical algorithms
R
Rapelang Rabana
Grzegorz Rozenberg – natural computing, automata theory, graph transformations and concurrent systems
Michael O. Rabin – nondeterministic machine
Dragomir R. Radev – natural language processing, information retrieval
T. V. Raman – accessibility, Emacspeak
Brian Randell – ALGOL 60, software fault tolerance, dependability, pre-1950 history of computing hardware
Anders P. Ravn – Duration Calculus
Raj Reddy – artificial intelligence
David P. Reed
Trygve Reenskaug – model–view–controller (MVC) software architecture pattern
John C. Reynolds – continuations, definitional interpreters, defunctionalization, Forsythe, Gedanken language, intersection types, polymorphic lambda calculus, relational parametricity, separation logic, ALGOL
Joyce K. Reynolds – Internet
Reinder van de Riet – editor of Data and Knowledge Engineering, COLOR-X event modeling language
Bernard Richards – medical informatics
Martin Richards – BCPL
Adam Riese
C. J. van Rijsbergen
Dennis Ritchie – C (programming language), Unix
Ron Rivest – RSA, MD5, RC4
Ken Robinson – formal methods
Colette Rolland – REMORA methodology, meta modelling
John Romero – codeveloped Doom
Azriel Rosenfeld
Douglas T. Ross – Automatically Programmed Tools (APT), Computer-aided design, structured analysis and design technique, ALGOL X
Guido van Rossum – Python (programming language)
Winston W. Royce – waterfall model
Rudy Rucker – mathematician, writer, educator
Steven Rudich – complexity theory, cryptography
Jeff Rulifson
James Rumbaugh – Unified Modeling Language, Object Management Group
Peter Ružička – Slovak computer scientist and mathematician
S
George Sadowsky
Umar Saif
Gerard Salton – information retrieval
Jean E. Sammet – programming languages
Claude Sammut – artificial intelligence researcher
Carl Sassenrath – operating systems, programming languages, Amiga, REBOL
Mahadev Satyanarayanan – file systems, distributed systems, mobile computing, pervasive computing
Walter Savitch – discovery of complexity class NL, Savitch's theorem, natural language processing, mathematical linguistics
Jonathan Schaeffer
Wilhelm Schickard – one of the first calculating machines
Jürgen Schmidhuber – artificial intelligence, deep learning, artificial neural networks, recurrent neural networks, Gödel machine, artificial curiosity, meta-learning
Steve Schneider – formal methods, security
Bruce Schneier – cryptography, security
Fred B. Schneider – concurrent and distributed computing
Sarita Schoenebeck – human–computer interaction
Glenda Schroeder – command-line shell, e-mail
Bernhard Schölkopf – machine learning, artificial intelligence
Dana Scott – domain theory
Michael L. Scott – programming languages, algorithms, distributed computing
Robert Sedgewick – algorithms, data structures
Ravi Sethi – compilers, 2nd Dragon Book
Nigel Shadbolt
Adi Shamir – RSA, cryptanalysis
Claude Shannon – information theory
David E. Shaw – computational finance, computational biochemistry, parallel architectures
Cliff Shaw – systems programmer, artificial intelligence
Scott Shenker – networking
Ben Shneiderman – human–computer interaction, information visualization
Edward H. Shortliffe – MYCIN (medical diagnostic expert system)
Daniel Siewiorek – electronic design automation, reliability computing, context aware mobile computing, wearable computing, computer-aided design, rapid prototyping, fault tolerance
Joseph Sifakis – model checking
Herbert A. Simon – artificial intelligence
Munindar P. Singh – multiagent systems, software engineering, artificial intelligence, social networks
Ramesh Sitaraman – helped build Akamai's high performance network
Daniel Sleator – splay tree, amortized analysis
Aaron Sloman – artificial intelligence and cognitive science
Arne Sølvberg – information modelling
Brian Cantwell Smith – reflection (computer science), 3lisp
Steven Spewak – enterprise architecture planning
Carol Spradling
Robert Sproull
Rohini Kesavan Srihari – information retrieval, text analytics, multilingual text mining
Sargur Srihari – pattern recognition, machine learning, computational criminology, CEDAR-FOX
Maciej Stachowiak – GNOME, Safari, WebKit
Richard Stallman (born 1953) – GNU Project
Ronald Stamper
Richard E. Stearns – computational complexity theory
Guy L. Steele, Jr. – Scheme, Common Lisp
Thomas Sterling – creator of Beowulf clusters
Alexander Stepanov – generic programming
W. Richard Stevens (1951–1999) – author of books, including TCP/IP Illustrated and Advanced Programming in the Unix Environment
Larry Stockmeyer – computational complexity, distributed computing
Salvatore Stolfo – computer security, machine learning
Michael Stonebraker – relational database practice and theory
Olaf Storaasli – finite element machine, linear algebra, high performance computing
Christopher Strachey – denotational semantics
Volker Strassen – matrix multiplication, integer multiplication, Solovay–Strassen primality test
Bjarne Stroustrup – C++
Madhu Sudan – computational complexity theory, coding theory
Gerald Jay Sussman – Scheme
Bert Sutherland – graphics, Internet
Ivan Sutherland – graphics
Mario Szegedy – complexity theory, quantum computing
T
Parisa Tabriz – Google Director of Engineering, also known as the Security Princess
Roberto Tamassia – computational geometry, computer security
Andrew S. Tanenbaum – operating systems, MINIX
Austin Tate – Artificial Intelligence Applications, AI Planning, Virtual Worlds
Bernhard Thalheim – conceptual modelling foundation
Éva Tardos
Gábor Tardos
Robert Tarjan – splay tree
Valerie Taylor
Mario Tchou – Italian engineer, of Chinese descent, leader of Olivetti Elea project
Jaime Teevan
Shang-Hua Teng – analysis of algorithms
Larry Tesler – human–computer interaction, graphical user interface, Apple Macintosh
Avie Tevanian – Mach kernel team, NeXT, Mac OS X
Charles P. Thacker – Xerox Alto, Microsoft Research
Daniel Thalmann – computer graphics, virtual actor
Ken Thompson – Unix
Sebastian Thrun – AI researcher, pioneered autonomous driving
Walter F. Tichy – RCS
Seinosuke Toda – computation complexity, recipient of 1998 Gödel Prize
Linus Torvalds – Linux kernel, Git
Leonardo Torres y Quevedo (1852–1936) – invented El Ajedrecista (the chess player) in 1912, a true automaton built to play chess without human guidance. In his work Essays on Automatics (1913), he introduced the idea of floating-point arithmetic. In 1920, he built an early electromechanical version of the Analytical Engine.
Godfried Toussaint – computational geometry, computational music theory
Gloria Townsend
Edwin E. Tozer – business information systems
Joseph F Traub – computational complexity of scientific problems
John V. Tucker – computability theory
John Tukey – co-developer of the FFT algorithm, box plot and exploratory data analysis; coined the term 'bit'
Alan Turing (1912–1954) – British computing pioneer, Turing machine, algorithms, cryptology, computer architecture
David Turner – SASL, Kent Recursive Calculator, Miranda, IFIP WG 2.1 member
Murray Turoff – computer-mediated communication
U
Jeffrey D. Ullman – compilers, databases, complexity theory
V
Leslie Valiant – computational complexity theory, computational learning theory
Vladimir Vapnik – pattern recognition, computational learning theory
Moshe Vardi – professor of computer science at Rice University
Dorothy Vaughan
Umesh Vazirani
Manuela M. Veloso
François Vernadat – enterprise modeling
Richard Veryard – enterprise modeling
Sergiy Vilkomir – software testing, RC/DC
Paul Vitanyi – Kolmogorov complexity, Information distance, Normalized compression distance, Normalized Google distance
Andrew Viterbi – Viterbi algorithm
Jeffrey Scott Vitter – external memory algorithms, compressed data structures, data compression, databases
Paul Vixie – DNS, BIND, PAIX, Internet Software Consortium, MAPS, DNSBL
W
Eiiti Wada – ALGOL N, IFIP WG 2.1 member, Japanese Industrial Standards (JIS) X 0208, 0212, Happy Hacking Keyboard
David Wagner – security, cryptography
David Waltz
James Z. Wang
Steve Ward
Manfred K. Warmuth – computational learning theory
David H. D. Warren – AI, logic programming, Prolog, Warren Abstract Machine (WAM)
Kevin Warwick – artificial intelligence
Jan Weglarz
Philip Wadler – functional programming, Haskell, Monad, Java, Logic
Peter Wegner – object-oriented programming, interaction (computer science)
Joseph Henry Wegstein – ALGOL 58, ALGOL 60, IFIP WG 2.1 member, data processing technical standards, fingerprint analysis
Peter J. Weinberger – programming language design, the 'w' in AWK
Mark Weiser – ubiquitous computing
Joseph Weizenbaum – artificial intelligence, ELIZA
David Wheeler – EDSAC, subroutines
Franklin H. Westervelt – use of computers in engineering education, conversational use of computers, Michigan Terminal System (MTS), ARPANET, distance learning
Steve Whittaker – human computer interaction, computer support for cooperative work, social media
Jennifer Widom – nontraditional data management
Gio Wiederhold – database management systems
Norbert Wiener – Cybernetics
Adriaan van Wijngaarden – Dutch pioneer; ARRA, ALGOL, IFIP WG 2.1 member
Mary Allen Wilkes – LINC developer, assembler-linker designer
Maurice Vincent Wilkes – microprogramming, EDSAC
Yorick Wilks – computational linguistics, artificial intelligence
James H. Wilkinson – numerical analysis
Sophie Wilson – ARM architecture
Shmuel Winograd – Coppersmith–Winograd algorithm
Terry Winograd – artificial intelligence, SHRDLU
Patrick Winston – artificial intelligence
Niklaus Wirth – ALGOL W, IFIP WG 2.1 member, Pascal, Modula, Oberon
Neil Wiseman – computer graphics
Dennis E. Wisnosky – Integrated Computer-Aided Manufacturing (ICAM), IDEF
Stephen Wolfram – Mathematica
Mike Woodger – Pilot ACE, ALGOL 60, Ada (programming language)
Philip Woodward – ambiguity function, sinc function, comb operator, rep operator, ALGOL 68-R
Beatrice Helen Worsley – wrote the first PhD dissertation involving modern computers; was one of the people who wrote Transcode
Steve Wozniak – engineered first generation personal computers at Apple Computer
Jie Wu – computer networks
William Wulf – BLISS system programming language + optimizing compiler, Hydra operating system, Tartan Laboratories
Y
Mihalis Yannakakis
Andrew Chi-Chih Yao
John Yen
Nobuo Yoneda – Yoneda lemma, Yoneda product, ALGOL, IFIP WG 2.1 member
Edward Yourdon – Structured Systems Analysis and Design Method
Moti Yung
Z
Lotfi Zadeh – fuzzy logic
Hans Zantema – termination analysis
Arif Zaman – pseudo-random number generator
Stanley Zdonik – database management systems
Hussein Zedan – formal methods and real-time systems
Shlomo Zilberstein – artificial intelligence, anytime algorithms, automated planning, and decentralized POMDPs
Jill Zimmerman – James M. Beall Professor of Mathematics and Computer Science at Goucher College
Konrad Zuse – German pioneer of hardware and software
See also
List of computing people
List of important publications in computer science
List of Jewish American computer scientists
List of members of the National Academy of Sciences (computer and information sciences)
List of pioneers in computer science
List of programmers
List of programming language researchers
List of Russian IT developers
List of Slovenian computer scientists
List of Indian computer scientists
7088 | https://en.wikipedia.org/wiki/List%20of%20cryptographers | List of cryptographers | This is a list of cryptographers. Cryptography is the practice and study of techniques for secure communication in the presence of third parties called adversaries.
Pre twentieth century
Al-Khalil ibn Ahmad al-Farahidi: wrote a (now lost) book on cryptography titled the "Book of Cryptographic Messages".
Al-Kindi, 9th century Arabic polymath and originator of frequency analysis.
Athanasius Kircher, attempts to decipher crypted messages
Augustus the Younger, Duke of Brunswick-Lüneburg, wrote a standard book on cryptography
Ibn Wahshiyya: published several cipher alphabets that were used to encrypt magic formulas.
John Dee, wrote an occult book, which in fact was a cover for crypted text
Ibn 'Adlan: 13th-century cryptographer who made important contributions on the sample size of the frequency analysis.
Francesco I Gonzaga, Duke of Mantua, used the earliest known example of a homophonic substitution cipher in the early 1400s.
Ibn al-Durayhim: gave detailed descriptions of eight cipher systems that discussed substitution ciphers, leading to the earliest suggestion of a "tableau" of the kind that two centuries later became known as the "Vigenère table".
Ahmad al-Qalqashandi: Author of Subh al-a'sha, a fourteen-volume encyclopedia in Arabic that included a section on cryptology. The list of ciphers in this work included both substitution and transposition and, for the first time, a cipher with multiple substitutions for each plaintext letter.
Charles Babbage, UK, 19th century mathematician who, about the time of the Crimean War, secretly developed an effective attack against polyalphabetic substitution ciphers.
Leone Battista Alberti, polymath/universal genius, inventor of polyalphabetic substitution (more specifically, the Alberti cipher), and what may have been the first mechanical encryption aid.
Giovanni Battista della Porta, author of a seminal work on cryptanalysis.
Étienne Bazeries, French, military, considered one of the greatest natural cryptanalysts. Best known for developing the "Bazeries Cylinder" and his influential 1901 text Les Chiffres secrets dévoilés ("Secret ciphers unveiled").
Giovan Battista Bellaso, Italian cryptologist
Giovanni Fontana (engineer), wrote two encrypted books
Hildegard of Bingen used her own alphabet to write letters.
Julius Caesar, Roman general/politician, has the Caesar cipher named after him, and a lost work on cryptography by Probus (probably Valerius Probus) is claimed to have covered his use of military cryptography in some detail. It is likely that he did not invent the cipher named after him, as other substitution ciphers were in use well before his time.
Friedrich Kasiski, author of the first published attack on the Vigenère cipher, now known as the Kasiski test.
Auguste Kerckhoffs, known for contributing cipher design principles.
Edgar Allan Poe, author of the book, A Few Words on Secret Writing, an essay on cryptanalysis, and The Gold Bug, a short story featuring the use of letter frequencies in the solution of a cryptogram.
Johannes Trithemius, mystic and first to describe tableaux (tables) for use in polyalphabetic substitution. Wrote an early work on steganography and cryptography generally.
Philips van Marnix, lord of Sint-Aldegonde, deciphered Spanish messages for William the Silent during the Dutch revolt against the Spanish.
John Wallis, codebreaker for Cromwell and Charles II
Sir Charles Wheatstone, inventor of the so-called Playfair cipher and general polymath.
World War I and World War II wartime cryptographers
Lambros D. Callimahos, US, NSA, worked with William F. Friedman, taught NSA cryptanalysts.
Ann Z. Caracristi, US, SIS, solved Japanese Army codes in WW II, later became Deputy Director of National Security Agency.
Alec Naylor Dakin, UK, Hut 4, Bletchley park during World War II.
Ludomir Danilewicz, Poland, Biuro Szyfrow, helped to construct the Enigma machine copies to break the ciphers.
Alastair Denniston, UK, director of GC&CS at Bletchley Park from 1919 to 1942.
Agnes Meyer Driscoll, US, broke several Japanese ciphers.
Genevieve Grotjan Feinstein, US, SIS, noticed the pattern that led to breaking Purple.
Elizebeth Smith Friedman, US, Coast Guard and US Treasury Department cryptographer, co-invented modern cryptography.
William F. Friedman, US, SIS, introduced statistical methods into cryptography.
Cecilia Elspeth Giles, UK, Bletchley Park
Jack Good UK, GC&CS, Bletchley Park worked with Alan Turing on the statistical approach to cryptanalysis.
Nigel de Grey, UK, Room 40, played an important role in the decryption of the Zimmermann Telegram during World War I.
Dillwyn Knox, UK, Room 40 and GC&CS, broke commercial Enigma cipher as used by the Abwehr (German military intelligence).
Solomon Kullback US, SIS, helped break the Japanese Red cipher, later Chief Scientist at the National Security Agency.
Frank W. Lewis US, worked with William F. Friedman, puzzle master
William Hamilton Martin and Bernon F. Mitchell, U.S. National Security Agency cryptologists who defected to the Soviet Union in 1960
Leo Marks UK, SOE cryptography director, author and playwright.
Donald Michie UK, GC&CS, Bletchley Park worked on Cryptanalysis of the Lorenz cipher and the Colossus computer.
Max Newman, UK, GC&CS, Bletchley Park headed the section that developed the Colossus computer for Cryptanalysis of the Lorenz cipher.
Georges Painvin French, broke the ADFGVX cipher during the First World War.
Marian Rejewski, Poland, Biuro Szyfrów, a Polish mathematician and cryptologist who, in 1932, solved the Enigma machine with plugboard, the main cipher device then in use by Germany.
John Joseph Rochefort US, made major contributions to the break into JN-25 after the attack on Pearl Harbor.
Leo Rosen US, SIS, deduced that the Japanese Purple machine was built with stepping switches.
Frank Rowlett US, SIS, leader of the team that broke Purple.
Jerzy Różycki, Poland, Biuro Szyfrów, helped break German Enigma ciphers.
Luigi Sacco, Italy, Italian General and author of the Manual of Cryptography.
Laurance Safford US, chief cryptographer for the US Navy for more than two decades, including World War II.
Abraham Sinkov US, SIS.
John Tiltman UK, Brigadier, Room 40, GC&CS, Bletchley Park, GCHQ, NSA. Extraordinary length and range of cryptographic service
Alan Mathison Turing UK, GC&CS, Bletchley Park where he was chief cryptographer, inventor of the Bombe that was used in decrypting Enigma, mathematician, logician, and renowned pioneer of Computer Science.
William Thomas Tutte UK, GC&CS, Bletchley Park, with John Tiltman, broke Lorenz SZ 40/42 encryption machine (codenamed Tunny) leading to the development of the Colossus computer.
William Stone Weedon, US,
Gordon Welchman UK, GC&CS, Bletchley Park where he was head of Hut Six (German Army and Air Force Enigma cipher decryption), made an important contribution to the design of the Bombe.
Herbert Yardley US, MI8 (US), author of "The American Black Chamber", worked in China as a cryptographer and briefly in Canada.
Henryk Zygalski, Poland, Biuro Szyfrów, helped break German Enigma ciphers.
Karl Stein German, Head of the Division IVa (security of own processes) at Cipher Department of the High Command of the Wehrmacht. Discoverer of Stein manifold.
Gisbert Hasenjaeger German, Tester of the Enigma. Discovered new proof of the completeness theorem of Kurt Gödel for predicate logic.
Heinrich Scholz German, Worked in Division IVa at OKW. Logician and pen friend of Alan Turing.
Gottfried Köthe German, Cryptanalyst at OKW. Mathematician created theory of topological vector spaces.
Ernst Witt German, Mathematician at OKW. Several mathematical discoveries are named after him.
Helmut Grunsky German, worked in complex analysis and geometric function theory. He introduced Grunsky's theorem and the Grunsky inequalities.
Georg Hamel.
Oswald Teichmüller German, Temporarily employed at OKW as cryptanalyst. Introduced quasiconformal mappings and differential geometric methods into complex analysis. Described by Friedrich L. Bauer as an extreme Nazi and a true genius.
Hans Rohrbach German, Mathematician at AA/Pers Z, the German department of state, civilian diplomatic cryptological agency.
Wolfgang Franz German, Mathematician who worked at OKW. Later significant discoveries in Topology.
Werner Weber German, Mathematician at OKW.
Georg Aumann German, Mathematician at OKW. His doctoral student was Friedrich L. Bauer.
Otto Leiberich German, Mathematician who worked as a linguist at the Cipher Department of the High Command of the Wehrmacht.
Alexander Aigner German, Mathematician who worked at OKW.
Erich Hüttenhain German, Chief cryptanalyst who led Chi IV (section 4) of the Cipher Department of the High Command of the Wehrmacht. A mathematician and cryptanalyst who tested a number of German cipher machines and found them to be breakable.
Wilhelm Fenner German, Chief Cryptologist and Director of Cipher Department of the High Command of the Wehrmacht.
Walther Fricke German, Worked alongside Dr Erich Hüttenhain at Cipher Department of the High Command of the Wehrmacht. Mathematician, logician, cryptanalyst and linguist.
Fritz Menzer German. Inventor of SG39 and SG41.
Other pre-computer
Rosario Candela, US, Architect and notable amateur cryptologist who authored books and taught classes on the subject to civilians at Hunter College.
Claude Elwood Shannon, US, founder of information theory, proved the one-time pad to be unbreakable.
Modern
See also: Category:Modern cryptographers for a more exhaustive list.
Symmetric-key algorithm inventors
Ross Anderson, UK, University of Cambridge, co-inventor of the Serpent cipher.
Paulo S. L. M. Barreto, Brazilian, University of São Paulo, co-inventor of the Whirlpool hash function.
George Blakley, US, independent inventor of secret sharing.
Eli Biham, Israel, co-inventor of the Serpent cipher.
Don Coppersmith, co-inventor of DES and MARS ciphers.
Joan Daemen, Belgian, co-developer of Rijndael which became the Advanced Encryption Standard (AES), and Keccak which became SHA-3.
Horst Feistel, German, IBM, namesake of Feistel networks and Lucifer cipher.
Lars Knudsen, Denmark, co-inventor of the Serpent cipher.
Ralph Merkle, US, inventor of Merkle trees.
Bart Preneel, Belgian, co-inventor of RIPEMD-160.
Vincent Rijmen, Belgian, co-developer of Rijndael which became the Advanced Encryption Standard (AES).
Ronald L. Rivest, US, MIT, inventor of RC cipher series and MD algorithm series.
Bruce Schneier, US, inventor of Blowfish and co-inventor of Twofish and Threefish.
Xuejia Lai, CH, co-inventor of International Data Encryption Algorithm (IDEA).
Adi Shamir, Israel, Weizmann Institute, inventor of secret sharing.
Asymmetric-key algorithm inventors
Leonard Adleman, US, USC, the 'A' in RSA.
David Chaum, US, inventor of blind signatures.
Clifford Cocks, UK, GCHQ, first inventor of RSA, a fact that remained secret until 1997 and so was unknown to Rivest, Shamir, and Adleman.
Whitfield Diffie, US, (public) co-inventor of the Diffie-Hellman key-exchange protocol.
Taher Elgamal, US (born Egyptian), inventor of the Elgamal discrete log cryptosystem.
Shafi Goldwasser, US and Israel, MIT and Weizmann Institute, co-discoverer of zero-knowledge proofs, and of Semantic security.
Martin Hellman, US, (public) co-inventor of the Diffie-Hellman key-exchange protocol.
Neal Koblitz, independent co-creator of elliptic curve cryptography.
Alfred Menezes, co-inventor of MQV, an elliptic curve technique.
Silvio Micali, US (born Italian), MIT, co-discoverer of zero-knowledge proofs, and of Semantic security.
Victor Miller, independent co-creator of elliptic curve cryptography.
David Naccache, inventor of the Naccache–Stern cryptosystem and of the Naccache–Stern knapsack cryptosystem.
Moni Naor, co-inventor of the Naor-Yung encryption paradigm for CCA security.
Pascal Paillier, inventor of Paillier encryption.
Michael O. Rabin, Israel, inventor of Rabin encryption.
Ronald L. Rivest, US, MIT, the 'R' in RSA.
Adi Shamir, Israel, Weizmann Institute, the 'S' in RSA.
Moti Yung, co-inventor of the Naor-Yung encryption paradigm for CCA security, of threshold cryptosystems, and of proactive cryptosystems.
Cryptanalysts
Joan Clarke, English cryptanalyst and numismatist best known for her work as a code-breaker at Bletchley Park during the Second World War.
Ross Anderson, UK.
Eli Biham, Israel, co-discoverer of differential cryptanalysis and Related-key attack.
Matt Blaze, US.
Dan Boneh, US, Stanford University.
Niels Ferguson, Netherlands, co-inventor of Twofish and Fortuna.
Ian Goldberg, Canada, University of Waterloo.
Lars Knudsen, Denmark, DTU, discovered integral cryptanalysis.
Paul Kocher, US, discovered differential power analysis.
Mitsuru Matsui, Japan, discoverer of linear cryptanalysis.
David Wagner, US, UC Berkeley, co-discoverer of the slide and boomerang attacks.
Xiaoyun Wang, the People's Republic of China, known for MD5 and SHA-1 hash function attacks.
Alex Biryukov, University of Luxembourg, known for impossible differential cryptanalysis and slide attack.
Moti Yung, Kleptography.
Algorithmic number theorists
Daniel J. Bernstein, US, developed several popular algorithms, fought US government restrictions in Bernstein v. United States.
Don Coppersmith, US
Dorian M. Goldfeld, US. Along with Michael Anshel and Iris Anshel invented the Anshel–Anshel–Goldfeld key exchange and the Algebraic Eraser. They also helped found Braid Group Cryptography.
Theoreticians
Mihir Bellare, US, UCSD, co-proposer of the Random oracle model.
Dan Boneh, US, Stanford.
Gilles Brassard, Canada, Université de Montréal. Co-inventor of quantum cryptography.
Claude Crépeau, Canada, McGill University.
Oded Goldreich, Israel, Weizmann Institute, author of Foundations of Cryptography.
Shafi Goldwasser, US and Israel.
Silvio Micali, US, MIT.
Rafail Ostrovsky, US, UCLA.
Charles Rackoff, co-discoverer of zero-knowledge proofs.
Oded Regev, inventor of learning with errors.
Phillip Rogaway, US, UC Davis, co-proposer of the Random oracle model.
Amit Sahai, US, UCLA.
Gustavus Simmons, US, Sandia, authentication theory.
Moti Yung, US, Google.
Government cryptographers
Clifford Cocks, UK, GCHQ, secret inventor of the algorithm later known as RSA.
James H. Ellis, UK, GCHQ, secretly proved the possibility of asymmetric encryption.
Lowell Frazer, USA, National Security Agency
Julia Wetzel, USA, National Security Agency
Malcolm Williamson, UK, GCHQ, secret inventor of the protocol later known as the Diffie–Hellman key exchange.
Cryptographer businesspeople
Bruce Schneier, US, CTO and founder of Counterpane Internet Security, Inc. and cryptography author.
Scott Vanstone, Canada, founder of Certicom and elliptic curve cryptography proponent.
See also
Cryptography
References
External links
List of cryptographers' home pages
Cryptographers
Cryptographers |
7110 | https://en.wikipedia.org/wiki/CSS%20%28disambiguation%29 | CSS (disambiguation) | CSS, or Cascading Style Sheets, is a language used to describe the style of document presentations in web development.
CSS may also refer to:
Computing and telecommunications
Central Structure Store, in the PHIGS 3D API
Chirp spread spectrum, a modulation concept, part of the standard IEEE 802.15.4aCSS
Proprietary software, software that is not distributed with source code; sometimes known as closed-source software
Computational social science, academic sub-disciplines concerned with computational approaches to the social sciences
Content Scramble System, an encryption algorithm in DVDs
Content Services Switch, a family of load balancers produced by Cisco
CSS code, a type of error-correcting code in quantum information theory
Arts and entertainment
Campus SuperStar, a popular Singapore school-based singing competition
Closed Shell Syndrome, a fictional disease in the Ghost in the Shell television series
Comcast/Charter Sports Southeast, a defunct southeast U.S. sports cable television network
Counter-Strike: Source, an online first-person shooter computer game
CSS (band), Cansei de Ser Sexy, a Brazilian electro-rock band
Government
Canadian Survey Ship, of the Canadian Hydrographic Service
Center for Strategic Studies in Iran
Central Security Service, the military component of the US National Security Agency
Central Superior Services of Pakistan
Chicago South Shore and South Bend Railroad, a U.S. railroad
Committee for State Security (Bulgaria), a former name for the Bulgarian secret service
KGB, the Committee for State Security, the Soviet Union's security agency
Supreme Security Council of Moldova, named (CSS) in Romanian
Military
Combat service support
Confederate Secret Service, the secret service operations of the Confederate States of America during the American Civil War
Confederate States Ship, a ship of the historical naval branch of the Confederate States armed forces
Dongfeng missile, a Chinese surface-to-surface missile system (NATO designation code CSS)
Schools and education
Centennial Secondary School (disambiguation)
Certificat de Sécurité Sauvetage, the former name of Certificat de formation à la sécurité, the French national degree required to be flight attendant in France
Chase Secondary School, British Columbia, Canada
Clementi Secondary School, Hong Kong SAR, China
College of Social Studies, at Wesleyan University, Middletown, Connecticut, USA
College of St. Scholastica, Duluth, Minnesota, USA
Colorado Springs School, Colorado Springs, CO, USA
Columbia Secondary School, New York, NY, USA
Commonwealth Secondary School, Jurong East, Singapore
Courtice Secondary School, Courtice, Canada
CSS Profile, College Scholarship Service Profile, a U.S. student aid application form
Space
Chinese space station, a modular space station project
Catalina Sky Survey, an astronomical survey
Commercial space station (disambiguation)
Control stick steering, a method of flying the Space Shuttle manually
Other organisations
CS Sfaxien, a Tunisian sport club
Comcast/Charter Sports Southeast, a cable-exclusive regional sports television network
Citizens Signpost Service, a body of the European Commission
Community Service Society of New York
Congregation of the Sacred Stigmata, or Stigmatines, a Catholic religious order
Cryptogamic Society of Scotland, a Scottish botanical research society
Medicine and health science
Cancer-specific survival, survival rates specific to cancer type
Cytokine storm syndrome
Churg–Strauss syndrome, a type of autoimmune vasculitis, also known as eosinophilic granulomatosis with polyangiitis
Cross-sectional study, a study collecting data across a population at one point in time
Coronary steal syndrome, the syndrome resulting from the blood flow problem called coronary steal
Carotid sinus syndrome (carotid sinus syncope)—see Carotid sinus § Disease of the carotid sinus
Other uses
Chessington South railway station, a National Rail station code in England
Chicago South Shore and South Bend Railroad, a freight railroad between Chicago, Illinois, and South Bend, Indiana
Constant surface speed, a mode of machine tool operation, an aspect of speeds and feeds
Context-sensitive solutions, in transportation planning
Customer satisfaction survey, a tool used in customer satisfaction research
Cyclic steam stimulation, an oil field extraction technique; see Steam injection (oil industry)
Cab Signaling System, a train protection system
Close-space sublimation, a method for producing thin film solar cells, esp. Cadmium telluride
Competition Scratch Score, an element of the golf handicapping system in the United Kingdom and Republic of Ireland
The ISO 639-3 code for Southern Ohlone, also known as Costanoan, an indigenous language or language family spoken in California
See also
Cross-site scripting (XSS) |
7331 | https://en.wikipedia.org/wiki/Cellular%20digital%20packet%20data | Cellular digital packet data | Cellular Digital Packet Data (CDPD) was a wide-area mobile data service which used unused bandwidth normally used by AMPS mobile phones between 800 and 900 MHz to transfer data. Speeds up to 19.2 kbit/s were possible, though real world speeds seldom reached higher than 9.6 kbit/s. The service was discontinued in conjunction with the retirement of the parent AMPS service; it has been functionally replaced by faster services such as 1xRTT, EV-DO, and UMTS/HSPA.
Developed in the early 1990s, CDPD loomed large on the horizon as a future technology. However, it had difficulty competing against existing slower but less expensive Mobitex and DataTac systems, and never quite gained widespread acceptance before newer, faster standards such as GPRS became dominant.
CDPD had very limited consumer products. AT&T Wireless first sold the technology in the United States under the PocketNet brand, one of the first wireless web services. Digital Ocean, Inc., an OEM licensee of the Apple Newton, sold the Seahorse product in 1996, which integrated the Newton handheld computer, an AMPS/CDPD handset/modem, and a web browser, winning the CTIA's hardware product of the year award as a smartphone, arguably the world's first. A company named OmniSky provided service for Palm V devices; OmniSky filed for bankruptcy in 2001 and its service was picked up by EarthLink Wireless. Myron Feasel, the technician who developed the technical support for the wireless technology, was brought from company to company, ending up at Palm. Sierra Wireless sold PCMCIA devices and Airlink sold a serial modem.
Both of these were used by police and fire departments for dispatch. AT&T Wireless later sold CDPD under the Wireless Internet brand (not to be confused with Wireless Internet Express, their brand for GPRS/EDGE data). PocketNet was generally considered a failure with competition from 2G services such as Sprint's Wireless Web. AT&T Wireless sold four PocketNet Phone models to the public: the Samsung Duette and the Mitsubishi MobileAccess-120 were AMPS/CDPD PocketNet phones introduced in October 1997; and two IS-136/CDPD Digital PocketNet phones, the Mitsubishi T-250 and the Ericsson R289LX.
Despite its limited success as a consumer offering, CDPD was adopted in a number of enterprise and government networks. It was particularly popular as a first-generation wireless data solution for telemetry devices (machine to machine communications) and for public safety mobile data terminals.
In 2004, major carriers in the United States announced plans to shut down CDPD service. In July 2005, the AT&T Wireless and Cingular Wireless CDPD networks were shut down. Equipment for this service now has little to no residual value.
CDPD Network and system
Primary elements of a CDPD network are:
1. End systems: physical & logical end systems that exchange information
2. Intermediate systems: CDPD infrastructure elements that store, forward & route the information
There are 2 kinds of End systems
1. Mobile end system: subscriber unit to access CDPD network over a wireless interface
2. Fixed end system: common host/server that is connected to the CDPD backbone and providing access to specific application and data
There are 2 kinds of Intermediate systems
1. Generic intermediate system: simple router with no knowledge of mobility issues
2. Mobile data intermediate system: specialized intermediate system that routes data based on its knowledge of the current location of the mobile end system. It is a set of hardware and software functions that provide switching, accounting, registration, authentication, encryption, and so on (a simplified routing sketch follows this list).
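As a rough illustration of the routing role of the mobile data intermediate system, the Python sketch below is purely conceptual and not part of the CDPD specification; the class and field names are invented. It keeps a registration table mapping each mobile end system to the cell it last registered from and forwards traffic accordingly.

```python
# Conceptual sketch only: an MD-IS tracks where each mobile end system (M-ES)
# last registered and routes packets toward that cell. Names are illustrative.
class MobileDataIntermediateSystem:
    def __init__(self):
        self.location = {}            # M-ES identifier -> current serving cell

    def register(self, mes_id, cell):
        """Record that a mobile end system has registered from a new cell."""
        self.location[mes_id] = cell

    def route(self, mes_id, packet):
        """Forward a packet toward the cell where the M-ES is currently registered."""
        cell = self.location.get(mes_id)
        if cell is None:
            return None               # unknown subscriber: drop (or buffer) the packet
        return (cell, packet)         # hand off to the radio resources serving that cell

mdis = MobileDataIntermediateSystem()
mdis.register("M-ES-42", cell="sector-7")
print(mdis.route("M-ES-42", b"telemetry frame"))
```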
The design of CDPD was based on several design objectives that are often repeated in designing overlay networks or new networks. Much emphasis was placed on open architectures and on reusing as much of the existing RF infrastructure as possible. The design goals of CDPD included location independence and independence from service provider, so that coverage could be maximized; application transparency and multiprotocol support; and interoperability between products from multiple vendors.
External links
CIO CDPD article
History and Development
Detailed Description About CDPD
First generation mobile telecommunications |
7398 | https://en.wikipedia.org/wiki/Computer%20security | Computer security | Computer security, cybersecurity, or information technology security (IT security) is the protection of computer systems and networks from information disclosure, theft of or damage to their hardware, software, or electronic data, as well as from the disruption or misdirection of the services they provide.
The field is becoming increasingly significant due to the continuously expanding reliance on computer systems, the Internet and wireless network standards such as Bluetooth and Wi-Fi, and due to the growth of "smart" devices, including smartphones, televisions, and the various devices that constitute the "Internet of things". Cybersecurity is also one of the significant challenges in the contemporary world, due to its complexity, both in terms of political usage and technology. Its primary goal is to ensure the system's dependability, integrity, and data privacy.
History
Since the Internet's arrival and with the digital transformation initiated in recent years, the notion of cybersecurity has become a familiar subject both in our professional and personal lives. Cybersecurity and cyber threats have been constant for the last 50 years of technological change. In the 1970s and 1980s, computer security was mainly limited to academia until the conception of the Internet, where, with increased connectivity, computer viruses and network intrusions began to take off. After the spread of viruses in the 1990s, the 2000s marked the institutionalization of cyber threats and cybersecurity.
Finally, from the 2010s, large-scale attacks and government regulations started emerging.
The April 1967 session organized by Willis Ware at the Spring Joint Computer Conference, and the later publication of the Ware Report, were foundational moments in the history of the field of computer security. Ware's work straddled the intersection of material, cultural, political, and social concerns.
A 1977 NIST publication introduced the "CIA triad" of Confidentiality, Integrity, and Availability as a clear and simple way to describe key security goals. While still relevant, many more elaborate frameworks have since been proposed.
However, the 1970s and 1980s did not have any grave computer threats because computers and the internet were still developing, and security threats were easily identifiable. Most often, threats came from malicious insiders who gained unauthorized access to sensitive documents and files. Although malware and network breaches existed during the early years, attackers did not typically use them for financial gain. However, by the second half of the 1970s, established computer firms like IBM started offering commercial access control systems and computer security software products.
It started with Creeper in 1971. Creeper was an experimental computer program written by Bob Thomas at BBN. It is considered the first computer worm.
In 1972, Ray Tomlinson created the first anti-virus software, called Reaper, which moved across the ARPANET and deleted the Creeper worm.
Between September 1986 and June 1987, a group of German hackers performed the first documented case of cyber espionage. The group hacked into American defense contractors, universities, and military bases' networks and sold gathered information to the Soviet KGB. The group was led by Markus Hess, who was arrested on 29 June 1987. He was convicted of espionage (along with two co-conspirators) on 15 Feb 1990.
In 1988, one of the first computer worms, called Morris worm was distributed via the Internet. It gained significant mainstream media attention.
In 1993, Netscape started developing the protocol SSL, shortly after the National Center for Supercomputing Applications (NCSA) launched Mosaic 1.0, one of the first widely used web browsers, in 1993. Netscape had SSL version 1.0 ready in 1994, but it was never released to the public due to many serious security vulnerabilities. These weaknesses included replay attacks and a vulnerability that allowed hackers to alter unencrypted communications sent by users. However, in February 1995, Netscape launched Version 2.0.
Failed offensive strategy
The National Security Agency (NSA) is responsible for both the protection of U.S. information systems and also for collecting foreign intelligence. These two duties are in conflict with each other. Protecting information systems includes evaluating software, identifying security flaws, and taking steps to correct the flaws, which is a defensive action. Collecting intelligence includes exploiting security flaws to extract information, which is an offensive action. Correcting security flaws makes the flaws unavailable for NSA exploitation.
The agency analyzes commonly used software in order to find security flaws, which it reserves for offensive purposes against competitors of the United States. The agency seldom takes defensive action by reporting the flaws to software producers so they can eliminate the security flaws.
The offensive strategy worked for a while, but eventually other nations, including Russia, Iran, North Korea, and China, acquired their own offensive capability and tend to use it against the United States. NSA contractors created and sold "click-and-shoot" attack tools to U.S. agencies and close allies, but eventually the tools made their way to foreign adversaries. In 2016, the NSA's own hacking tools were hacked, and they have been used by Russia and North Korea. The NSA's employees and contractors have been recruited at high salaries by adversaries anxious to compete in cyberwarfare.
For example, in 2007, the United States and Israel began exploiting security flaws in the Microsoft Windows operating system to attack and damage equipment used in Iran to refine nuclear materials. Iran responded by heavily investing in their own cyberwarfare capability, which they began using against the United States.
Vulnerabilities and attacks
A vulnerability is a weakness in design, implementation, operation, or internal control. Most of the vulnerabilities that have been discovered are documented in the Common Vulnerabilities and Exposures (CVE) database. An exploitable vulnerability is one for which at least one working attack or "exploit" exists. Vulnerabilities can be researched, reverse-engineered, hunted, or exploited using automated tools or customized scripts. To secure a computer system, it is important to understand the attacks that can be made against it, and these threats can typically be classified into one of these categories below:
Backdoor
A backdoor in a computer system, a cryptosystem or an algorithm is any secret method of bypassing normal authentication or security controls. Backdoors may exist for many reasons, including original design or poor configuration. They may have been added by an authorized party to allow some legitimate access, or by an attacker for malicious reasons; but regardless of the motives for their existence, they create a vulnerability. Backdoors can be very hard to detect, and they are usually discovered by someone who has access to the application source code or who has intimate knowledge of the computer's operating system.
Denial-of-service attack
Denial of service attacks (DoS) are designed to make a machine or network resource unavailable to its intended users. Attackers can deny service to individual victims, such as by deliberately entering a wrong password enough consecutive times to cause the victim's account to be locked, or they may overload the capabilities of a machine or network and block all users at once. While a network attack from a single IP address can be blocked by adding a new firewall rule, many forms of Distributed denial of service (DDoS) attacks are possible, where the attack comes from a large number of points – and defending is much more difficult. Such attacks can originate from the zombie computers of a botnet or from a range of other possible techniques, including reflection and amplification attacks, where innocent systems are fooled into sending traffic to the victim.
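As a hedged illustration of why blocking a single source works but a distributed attack defeats it, the sketch below uses hypothetical thresholds rather than any particular firewall's implementation: it counts recent requests per source address, so one abusive address is easily cut off while an attack spread across thousands of addresses stays under every per-source limit.

```python
# Naive per-source rate limiting of the kind a single firewall rule emulates.
import time
from collections import defaultdict

WINDOW = 60       # seconds of history to keep (illustrative value)
LIMIT = 100       # requests allowed per source within the window (illustrative)

_hits = defaultdict(list)

def allow(source_ip):
    now = time.time()
    recent = [t for t in _hits[source_ip] if now - t < WINDOW]
    _hits[source_ip] = recent
    if len(recent) >= LIMIT:
        return False                  # block this single abusive source
    _hits[source_ip].append(now)
    return True                       # a distributed attack passes this check per source
```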
Direct-access attacks
An unauthorized user gaining physical access to a computer is most likely able to directly copy data from it. They may also compromise security by making operating system modifications, installing software worms, keyloggers, or covert listening devices, or by using wireless microphones. Even when the system is protected by standard security measures, these may be bypassed by booting another operating system or tool from a CD-ROM or other bootable media. Disk encryption and the Trusted Platform Module are designed to prevent these attacks.
Eavesdropping
Eavesdropping is the act of surreptitiously listening to a private computer "conversation" (communication), typically between hosts on a network. For instance, programs such as Carnivore and NarusInSight have been used by the FBI and NSA to eavesdrop on the systems of internet service providers. Even machines that operate as a closed system (i.e., with no contact to the outside world) can be eavesdropped upon via monitoring the faint electromagnetic transmissions generated by the hardware; TEMPEST is a specification by the NSA referring to these attacks.
Multi-vector, polymorphic attacks
Surfacing in 2017, a new class of multi-vector, polymorphic cyber threats combined several types of attacks and changed form to avoid cybersecurity controls as they spread.
Phishing
Phishing is the attempt to acquire sensitive information such as usernames, passwords, and credit card details directly from users by deceiving them. Phishing is typically carried out by email spoofing or instant messaging, and it often directs users to enter details at a fake website whose "look" and "feel" are almost identical to the legitimate one. The fake website often asks for personal information, such as log-in details and passwords. This information can then be used to gain access to the individual's real account on the real website. Preying on a victim's trust, phishing can be classified as a form of social engineering. Attackers use creative ways to gain access to real accounts. A common scam is for attackers to send fake electronic invoices to individuals showing that they recently purchased music, apps, or other items, and instructing them to click on a link if the purchases were not authorized.
Privilege escalation
Privilege escalation describes a situation where an attacker with some level of restricted access is able to, without authorization, elevate their privileges or access level. For example, a standard computer user may be able to exploit a vulnerability in the system to gain access to restricted data; or even become "root" and have full unrestricted access to a system.
Reverse engineering
Reverse engineering is the process by which a man-made object is deconstructed to reveal its designs, code, architecture, or to extract knowledge from the object; similar to scientific research, the only difference being that scientific research is about a natural phenomenon.
Side-channel attack
Any computational system affects its environment in some form. The effects it has on its environment range from electromagnetic radiation, to residual data in RAM cells that make a cold boot attack possible, to hardware implementation faults that allow access to, or guessing of, values that should normally be inaccessible. In side-channel attack scenarios, the attacker gathers such information about a system or network in order to infer its internal state and, as a result, access information which the victim assumes to be secure.
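One small, concrete defensive example (standard library only; the token values below are made up) is comparing secrets in constant time: a plain == comparison exits at the first mismatching byte, and that timing difference is itself a side channel an attacker can measure.

```python
import hmac

def check_token(supplied: bytes, expected: bytes) -> bool:
    # '==' short-circuits on the first differing byte, leaking via timing how
    # many leading bytes matched; hmac.compare_digest runs in constant time.
    return hmac.compare_digest(supplied, expected)

print(check_token(b"secret-token", b"secret-token"))   # True
print(check_token(b"secret-tokem", b"secret-token"))   # False
```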
Social engineering
Social engineering, in the context of computer security, aims to convince a user to disclose secrets such as passwords or card numbers, or to grant physical access by, for example, impersonating a senior executive, bank, contractor, or customer. This generally involves exploiting people's trust and relying on their cognitive biases. A common scam involves emails sent to accounting and finance department personnel, impersonating their CEO and urgently requesting some action. In early 2016, the FBI reported that such "business email compromise" (BEC) scams had cost US businesses more than $2 billion in about two years.
In May 2016, the Milwaukee Bucks NBA team was the victim of this type of cyber scam with a perpetrator impersonating the team's president Peter Feigin, resulting in the handover of all the team's employees' 2015 W-2 tax forms.
Spoofing
Spoofing is an act of masquerading as a valid entity through falsification of data (such as an IP address or username), in order to gain access to information or resources that one is otherwise unauthorized to obtain. There are several types of spoofing, including:
Email spoofing, where an attacker forges the sending (From, or source) address of an email.
IP address spoofing, where an attacker alters the source IP address in a network packet to hide their identity or impersonate another computing system.
MAC spoofing, where an attacker modifies the Media Access Control (MAC) address of their network interface controller to obscure their identity, or to pose as another.
Biometric spoofing, where an attacker produces a fake biometric sample to pose as another user.
Tampering
Tampering describes a malicious modification or alteration of data. So-called evil maid attacks and security services' planting of surveillance capability into routers are examples.
Malware
Malicious software (malware) installed on a computer can leak personal information, can give control of the system to the attacker and can delete data permanently.
Information security culture
Employee behavior can have a big impact on information security in organizations. Cultural concepts can help different segments of the organization work effectively or work against effectiveness towards information security within an organization. Information security culture is the "...totality of patterns of behavior in an organization that contributes to the protection of information of all kinds."
Andersson and Reimers (2014) found that employees often do not see themselves as part of their organization's information security effort and often take actions that impede organizational changes. Indeed, the Verizon Data Breach Investigations Report 2020, which examined 3,950 security breaches, discovered 30% of cyber security incidents involved internal actors within a company. Research shows information security culture needs to be improved continuously. In ″Information Security Culture from Analysis to Change″, authors commented, ″It's a never-ending process, a cycle of evaluation and change or maintenance.″ To manage the information security culture, five steps should be taken: pre-evaluation, strategic planning, operative planning, implementation, and post-evaluation.
Pre-evaluation: To identify the awareness of information security within employees and to analyze the current security policies.
Strategic planning: To come up with a better awareness program, clear targets need to be set. Assembling a team of skilled professionals is helpful to achieve it.
Operative planning: A good security culture can be established based on internal communication, management-buy-in, security awareness and a training program.
Implementation: Four stages should be used to implement the information security culture. They are:
Commitment of the management
Communication with organizational members
Courses for all organizational members
Commitment of the employees
Post-evaluation: To assess the success of the planning and implementation, and to identify unresolved areas of concern.
Systems at risk
The growth in the number of computer systems and the increasing reliance upon them by individuals, businesses, industries, and governments means that there is an increasing number of systems at risk.
Financial systems
The computer systems of financial regulators and financial institutions like the U.S. Securities and Exchange Commission, SWIFT, investment banks, and commercial banks are prominent hacking targets for cybercriminals interested in manipulating markets and making illicit gains. Websites and apps that accept or store credit card numbers, brokerage accounts, and bank account information are also prominent hacking targets, because of the potential for immediate financial gain from transferring money, making purchases, or selling the information on the black market. In-store payment systems and ATMs have also been tampered with in order to gather customer account data and PINs.
Utilities and industrial equipment
Computers control functions at many utilities, including coordination of telecommunications, the power grid, nuclear power plants, and valve opening and closing in water and gas networks. The Internet is a potential attack vector for such machines if connected, but the Stuxnet worm demonstrated that even equipment controlled by computers not connected to the Internet can be vulnerable. In 2014, the Computer Emergency Readiness Team, a division of the Department of Homeland Security, investigated 79 hacking incidents at energy companies.
Aviation
The aviation industry is very reliant on a series of complex systems which could be attacked. A simple power outage at one airport can cause repercussions worldwide, much of the system relies on radio transmissions which could be disrupted, and controlling aircraft over oceans is especially dangerous because radar surveillance only extends 175 to 225 miles offshore. There is also potential for attack from within an aircraft.
In Europe, with the Pan-European Network Service and NewPENS, and in the US with the NextGen program, air navigation service providers are moving to create their own dedicated networks.
The consequences of a successful attack range from loss of confidentiality to loss of system integrity, air traffic control outages, loss of aircraft, and even loss of life.
Consumer devices
Desktop computers and laptops are commonly targeted to gather passwords or financial account information, or to construct a botnet to attack another target. Smartphones, tablet computers, smart watches, and other mobile devices such as quantified self devices like activity trackers have sensors such as cameras, microphones, GPS receivers, compasses, and accelerometers which could be exploited, and may collect personal information, including sensitive health information. WiFi, Bluetooth, and cell phone networks on any of these devices could be used as attack vectors, and sensors might be remotely activated after a successful breach.
The increasing number of home automation devices such as the Nest thermostat are also potential targets.
Large corporations
Large corporations are common targets. In many cases attacks are aimed at financial gain through identity theft and involve data breaches. Examples include loss of millions of clients' credit card details by Home Depot, Staples, Target Corporation, and the most recent breach of Equifax.
Medical records have been targeted in general identity theft, health insurance fraud, and impersonation of patients to obtain prescription drugs for recreational purposes or resale. Although cyber threats continue to increase, 62% of all organizations did not increase security training for their business in 2015.
Not all attacks are financially motivated, however: security firm HBGary Federal suffered a serious series of attacks in 2011 from hacktivist group Anonymous in retaliation for the firm's CEO claiming to have infiltrated their group, and Sony Pictures was hacked in 2014 with the apparent dual motive of embarrassing the company through data leaks and crippling the company by wiping workstations and servers.
Automobiles
Vehicles are increasingly computerized, with engine timing, cruise control, anti-lock brakes, seat belt tensioners, door locks, airbags and advanced driver-assistance systems on many models. Additionally, connected cars may use WiFi and Bluetooth to communicate with onboard consumer devices and the cell phone network. Self-driving cars are expected to be even more complex. All of these systems carry some security risk, and such issues have gained wide attention.
Simple examples of risk include a malicious compact disc being used as an attack vector, and the car's onboard microphones being used for eavesdropping. However, if access is gained to a car's internal controller area network, the danger is much greater – and in a widely publicized 2015 test, hackers remotely carjacked a vehicle from 10 miles away and drove it into a ditch.
Manufacturers are reacting in numerous ways, with Tesla in 2016 pushing out some security fixes "over the air" into its cars' computer systems. In the area of autonomous vehicles, in September 2016 the United States Department of Transportation announced some initial safety standards and called for states to come up with uniform policies.
Government
Government and military computer systems are commonly attacked by activists and foreign powers. Local and regional government infrastructure such as traffic light controls, police and intelligence agency communications, personnel records, student records, and financial systems are also potential targets as they are now all largely computerized. Passports and government ID cards that control access to facilities which use RFID can be vulnerable to cloning.
Internet of things and physical vulnerabilities
The Internet of things (IoT) is the network of physical objects such as devices, vehicles, and buildings that are embedded with electronics, software, sensors, and network connectivity that enables them to collect and exchange data. Concerns have been raised that this is being developed without appropriate consideration of the security challenges involved.
While the IoT creates opportunities for more direct integration of the physical world into computer-based systems,
it also provides opportunities for misuse. In particular, as the Internet of Things spreads widely, cyberattacks are likely to become an increasingly physical (rather than simply virtual) threat. If a front door's lock is connected to the Internet, and can be locked/unlocked from a phone, then a criminal could enter the home at the press of a button from a stolen or hacked phone. People could stand to lose much more than their credit card numbers in a world controlled by IoT-enabled devices. Thieves have also used electronic means to circumvent non-Internet-connected hotel door locks.
An attack that targets physical infrastructure and/or human lives is sometimes referred to as a cyber-kinetic attack. As IoT devices and appliances gain currency, cyber-kinetic attacks can become pervasive and significantly damaging.
Medical systems
Medical devices have either been successfully attacked or had potentially deadly vulnerabilities demonstrated, including both in-hospital diagnostic equipment and implanted devices including pacemakers and insulin pumps. There are many reports of hospitals and hospital organizations getting hacked, including ransomware attacks, Windows XP exploits, viruses, and data breaches of sensitive data stored on hospital servers. On 28 December 2016 the US Food and Drug Administration released its recommendations for how medical device manufacturers should maintain the security of Internet-connected devices – but no structure for enforcement.
Energy sector
In distributed generation systems, the risk of a cyber attack is real, according to Daily Energy Insider. An attack could cause a loss of power in a large area for a long period of time, and such an attack could have just as severe consequences as a natural disaster. The District of Columbia is considering creating a Distributed Energy Resources (DER) Authority within the city, with the goal being for customers to have more insight into their own energy use and giving the local electric utility, Pepco, the chance to better estimate energy demand. The D.C. proposal, however, would "allow third-party vendors to create numerous points of energy distribution, which could potentially create more opportunities for cyber attackers to threaten the electric grid."
Impact of security breaches
Serious financial damage has been caused by security breaches, but because there is no standard model for estimating the cost of an incident, the only data available is that which is made public by the organizations involved. "Several computer security consulting firms produce estimates of total worldwide losses attributable to virus and worm attacks and to hostile digital acts in general. The 2003 loss estimates by these firms range from $13 billion (worms and viruses only) to $226 billion (for all forms of covert attacks). The reliability of these estimates is often challenged; the underlying methodology is basically anecdotal."
However, reasonable estimates of the financial cost of security breaches can actually help organizations make rational investment decisions. According to the classic Gordon-Loeb Model analyzing the optimal investment level in information security, one can conclude that the amount a firm spends to protect information should generally be only a small fraction of the expected loss (i.e., the expected value of the loss resulting from a cyber/information security breach).
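For readers who want the quantitative form, a commonly cited corollary of the Gordon-Loeb analysis (stated here informally, with the usual symbols, for the breach-probability functions studied in the model) bounds optimal spending by a fixed fraction of the expected loss:

```latex
% v = probability that a breach occurs, \lambda = loss if it does,
% z^*(v) = optimal security investment for the breach functions studied.
\[
  L = v\,\lambda
  \qquad\text{and}\qquad
  z^{*}(v) \;\le\; \tfrac{1}{e}\,v\,\lambda \;\approx\; 0.37\,L ,
\]
% i.e. it is never optimal to spend more than roughly 37% of the expected loss.
```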
Attacker motivation
As with physical security, the motivations for breaches of computer security vary between attackers. Some are thrill-seekers or vandals, some are activists, others are criminals looking for financial gain. State-sponsored attackers are now common and well resourced but started with amateurs such as Markus Hess who hacked for the KGB, as recounted by Clifford Stoll in The Cuckoo's Egg.
Additionally, recent attacker motivations can be traced back to extremist organizations seeking to gain political advantage or disrupt social agendas. The growth of the internet, mobile technologies, and inexpensive computing devices has led to a rise in capabilities but also to risk to environments that are deemed vital to operations. All critical targeted environments are susceptible to compromise, and this has led to a series of proactive studies on how to mitigate the risk by taking into consideration the motivations of these types of actors. Several stark differences exist between the hacker motivation and that of nation-state actors seeking to attack based on an ideological preference.
A standard part of threat modeling for any particular system is to identify what might motivate an attack on that system, and who might be motivated to breach it. The level and detail of precautions will vary depending on the system to be secured. A home personal computer, bank, and classified military network face very different threats, even when the underlying technologies in use are similar.
Computer protection (countermeasures)
In computer security, a countermeasure is an action, device, procedure or technique that reduces a threat, a vulnerability, or an attack by eliminating or preventing it, by minimizing the harm it can cause, or by discovering and reporting it so that corrective action can be taken.
Some common countermeasures are listed in the following sections:
Security by design
Security by design, or alternately secure by design, means that the software has been designed from the ground up to be secure. In this case, security is considered as a main feature.
Some of the techniques in this approach include:
The principle of least privilege, where each part of the system has only the privileges that are needed for its function. That way, even if an attacker gains access to that part, they only have limited access to the whole system.
Automated theorem proving to prove the correctness of crucial software subsystems.
Code reviews and unit testing, approaches to make modules more secure where formal correctness proofs are not possible.
Defense in depth, where the design is such that more than one subsystem needs to be violated to compromise the integrity of the system and the information it holds.
Default secure settings, and design to "fail secure" rather than "fail insecure" (see fail-safe for the equivalent in safety engineering). Ideally, a secure system should require a deliberate, conscious, knowledgeable and free decision on the part of legitimate authorities in order to make it insecure.
Audit trails tracking system activity, so that when a security breach occurs, the mechanism and extent of the breach can be determined. Storing audit trails remotely, where they can only be appended to, can keep intruders from covering their tracks (a minimal hash-chained sketch follows this list).
Full disclosure of all vulnerabilities, to ensure that the "window of vulnerability" is kept as short as possible when bugs are discovered.
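The following sketch illustrates the append-only audit-trail idea mentioned above (hypothetical record format, standard library only): each entry commits to the previous one by hash, so editing or deleting an earlier record breaks every later link when the chain is verified.

```python
import hashlib, json, time

def append(log, event):
    """Append an event; each record's digest covers the previous record's digest."""
    prev = log[-1]["digest"] if log else "0" * 64
    record = {"time": time.time(), "event": event, "prev": prev}
    material = prev + str(record["time"]) + json.dumps(event, sort_keys=True)
    record["digest"] = hashlib.sha256(material.encode()).hexdigest()
    log.append(record)
    return record

def verify(log):
    """Recompute the chain; tampering with any earlier entry is detected."""
    prev = "0" * 64
    for rec in log:
        material = prev + str(rec["time"]) + json.dumps(rec["event"], sort_keys=True)
        if rec["prev"] != prev or rec["digest"] != hashlib.sha256(material.encode()).hexdigest():
            return False
        prev = rec["digest"]
    return True

trail = []
append(trail, {"user": "alice", "action": "login"})
append(trail, {"user": "alice", "action": "read", "file": "payroll.xlsx"})
print(verify(trail))   # True; altering an earlier record makes this False
```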
Security architecture
The Open Security Architecture organization defines IT security architecture as "the design artifacts that describe how the security controls (security countermeasures) are positioned, and how they relate to the overall information technology architecture. These controls serve the purpose to maintain the system's quality attributes: confidentiality, integrity, availability, accountability and assurance services".
Techopedia defines security architecture as "a unified security design that addresses the necessities and potential risks involved in a certain scenario or environment. It also specifies when and where to apply security controls. The design process is generally reproducible." The key attributes of security architecture are:
the relationship of different components and how they depend on each other.
determination of controls based on risk assessment, good practices, finances, and legal matters.
the standardization of controls.
Practicing security architecture provides the right foundation to systematically address business, IT and security concerns in an organization.
Security measures
A state of computer "security" is the conceptual ideal, attained by the use of the three processes: threat prevention, detection, and response. These processes are based on various policies and system components, which include the following:
User account access controls and cryptography can protect systems files and data, respectively.
Firewalls are by far the most common prevention systems from a network security perspective as they can (if properly configured) shield access to internal network services, and block certain kinds of attacks through packet filtering. Firewalls can be both hardware- or software-based.
Intrusion Detection System (IDS) products are designed to detect network attacks in-progress and assist in post-attack forensics, while audit trails and logs serve a similar function for individual systems.
"Response" is necessarily defined by the assessed security requirements of an individual system and may cover the range from simple upgrade of protections to notification of legal authorities, counter-attacks, and the like. In some special cases, the complete destruction of the compromised system is favored, as it may happen that not all the compromised resources are detected.
Today, computer security consists mainly of "preventive" measures, like firewalls or an exit procedure. A firewall can be defined as a way of filtering network data between a host or a network and another network, such as the Internet, and can be implemented as software running on the machine, hooking into the network stack (or, in the case of most UNIX-based operating systems such as Linux, built into the operating system kernel) to provide real-time filtering and blocking. Another implementation is a so-called "physical firewall", which consists of a separate machine filtering network traffic. Firewalls are common amongst machines that are permanently connected to the Internet.
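A minimal sketch of the rule-evaluation idea behind packet filtering (first matching rule wins; the rules, addresses, and ports below are illustrative, not a real policy) might look like this:

```python
import ipaddress

RULES = [
    # (action, protocol, destination port or None for any, allowed source network)
    ("allow", "tcp", 443, ipaddress.ip_network("0.0.0.0/0")),
    ("allow", "tcp", 22,  ipaddress.ip_network("203.0.113.0/24")),  # example admin range
    ("deny",  "tcp", None, ipaddress.ip_network("0.0.0.0/0")),      # default deny
]

def decide(protocol, dst_port, src_ip):
    """Return the action of the first rule matching this packet's headers."""
    src = ipaddress.ip_address(src_ip)
    for action, proto, port, net in RULES:
        if proto == protocol and (port is None or port == dst_port) and src in net:
            return action
    return "deny"

print(decide("tcp", 22, "203.0.113.10"))   # allow
print(decide("tcp", 22, "198.51.100.7"))   # deny
```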
Some organizations are turning to big data platforms, such as Apache Hadoop, to extend data accessibility and machine learning to detect advanced persistent threats.
However, relatively few organizations maintain computer systems with effective detection systems, and fewer still have organized response mechanisms in place. As a result, as Reuters points out: "Companies for the first time report they are losing more through electronic theft of data than physical stealing of assets". The primary obstacle to effective eradication of cybercrime could be traced to excessive reliance on firewalls and other automated "detection" systems. Yet it is basic evidence gathering by using packet capture appliances that puts criminals behind bars.
In order to ensure adequate security, the confidentiality, integrity and availability of a network, better known as the CIA triad, must be protected and is considered the foundation to information security. To achieve those objectives, administrative, physical and technical security measures should be employed. The amount of security afforded to an asset can only be determined when its value is known.
Vulnerability management
Vulnerability management is the cycle of identifying, and remediating or mitigating vulnerabilities, especially in software and firmware. Vulnerability management is integral to computer security and network security.
Vulnerabilities can be discovered with a vulnerability scanner, which analyzes a computer system in search of known vulnerabilities, such as open ports, insecure software configuration, and susceptibility to malware. To be effective, these tools must be kept up to date with every new update the vendors release; such updates typically add checks for newly disclosed vulnerabilities.
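As a minimal illustration of the idea (not any particular scanner's implementation), the following Python sketch compares installed software versions against a hypothetical advisory list; the package names and version numbers are invented for the example:

    # Hypothetical advisory data: package -> highest version known to be vulnerable.
    ADVISORIES = {"examplelib": "1.4.2", "demo-server": "2.0.9"}

    installed = {"examplelib": "1.3.0", "demo-server": "2.1.0"}

    def parse(version):
        # Turn "1.4.2" into (1, 4, 2) so versions compare numerically.
        return tuple(int(part) for part in version.split("."))

    def vulnerable(installed, advisories):
        findings = []
        for package, version in installed.items():
            if package in advisories and parse(version) <= parse(advisories[package]):
                findings.append((package, version, advisories[package]))
        return findings

    for package, version, last_bad in vulnerable(installed, ADVISORIES):
        print(f"{package} {version} is at or below known-vulnerable version {last_bad}")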
Beyond vulnerability scanning, many organizations contract outside security auditors to run regular penetration tests against their systems to identify vulnerabilities. In some sectors, this is a contractual requirement.
Reducing vulnerabilities
While formal verification of the correctness of computer systems is possible, it is not yet common. Formally verified operating systems include seL4 and SYSGO's PikeOS, but these make up a very small percentage of the market.
Two-factor authentication is a method for mitigating unauthorized access to a system or sensitive information. It requires "something you know" (a password or PIN) and "something you have" (a card, dongle, cellphone, or another piece of hardware). This increases security, as an unauthorized person needs both of these to gain access.
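A common form of the "something you have" factor is a time-based one-time password generated on a phone or hardware token. The following minimal Python sketch shows the underlying TOTP computation (RFC 6238) using only the standard library; the Base32 secret is an arbitrary example, and a real deployment would rely on a vetted authentication library rather than hand-rolled code:

    import base64, hashlib, hmac, struct, time

    def totp(secret_b32, interval=30, digits=6):
        key = base64.b32decode(secret_b32, casefold=True)
        counter = int(time.time()) // interval            # current 30-second time step
        msg = struct.pack(">Q", counter)                  # 8-byte big-endian counter
        digest = hmac.new(key, msg, hashlib.sha1).digest()
        offset = digest[-1] & 0x0F                        # dynamic truncation (RFC 4226)
        code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % 10 ** digits).zfill(digits)

    # Example secret; the server and the user's device derive the same code each time step.
    print(totp("JBSWY3DPEHPK3PXP"))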
Social engineering and direct computer access (physical) attacks can only be prevented by non-computer means, which can be difficult to enforce, relative to the sensitivity of the information. Training is often involved to help mitigate this risk, but even in highly disciplined environments (e.g. military organizations), social engineering attacks can still be difficult to foresee and prevent.
Inoculation, derived from inoculation theory, seeks to prevent social engineering and other fraudulent tricks or traps by instilling a resistance to persuasion attempts through exposure to similar or related attempts.
It is possible to reduce an attacker's chances by keeping systems up to date with security patches and updates, using a security scanner and/or hiring people with expertise in security, though none of these guarantee the prevention of an attack. The effects of data loss/damage can be reduced by careful backing up and insurance.
Hardware protection mechanisms
While hardware may be a source of insecurity, such as with microchip vulnerabilities maliciously introduced during the manufacturing process, hardware-based or assisted computer security also offers an alternative to software-only computer security. Using devices and methods such as dongles, trusted platform modules, intrusion-aware cases, drive locks, disabling USB ports, and mobile-enabled access may be considered more secure due to the physical access (or sophisticated backdoor access) required in order to be compromised. Each of these is covered in more detail below.
USB dongles are typically used in software licensing schemes to unlock software capabilities, but they can also be seen as a way to prevent unauthorized access to a computer or other device's software. The dongle, or key, essentially creates a secure encrypted tunnel between the software application and the key. The principle is that an encryption scheme on the dongle, such as the Advanced Encryption Standard (AES), provides a stronger measure of security, since it is harder to hack and replicate the dongle than to simply copy the native software to another machine and use it. Another security application for dongles is to use them for accessing web-based content such as cloud software or Virtual Private Networks (VPNs). In addition, a USB dongle can be configured to lock or unlock a computer.
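To illustrate the kind of symmetric encryption such a scheme relies on (a sketch only, not any vendor's actual dongle protocol), the following Python example performs authenticated AES-GCM encryption using the widely used third-party cryptography package; in a real dongle the key would be generated inside, and never leave, the hardware:

    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    key = AESGCM.generate_key(bit_length=128)   # would normally live only inside the dongle
    aesgcm = AESGCM(key)

    nonce = os.urandom(12)                      # 96-bit nonce, unique per message
    ciphertext = aesgcm.encrypt(nonce, b"license check payload", None)
    plaintext = aesgcm.decrypt(nonce, ciphertext, None)
    assert plaintext == b"license check payload"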
Trusted platform modules (TPMs) secure devices by integrating cryptographic capabilities onto access devices, through the use of microprocessors, or so-called computers-on-a-chip. TPMs used in conjunction with server-side software offer a way to detect and authenticate hardware devices, preventing unauthorized network and data access.
Computer case intrusion detection refers to a device, typically a push-button switch, which detects when a computer case is opened. The firmware or BIOS is programmed to show an alert to the operator when the computer is booted up the next time.
Drive locks are essentially software tools to encrypt hard drives, making them inaccessible to thieves. Tools exist specifically for encrypting external drives as well.
Disabling USB ports is a security option for preventing unauthorized and malicious access to an otherwise secure computer. Infected USB dongles connected to a network from a computer inside the firewall are considered by the magazine Network World as the most common hardware threat facing computer networks.
Disconnecting or disabling peripheral devices (such as cameras, GPS, removable storage, etc.) that are not in use.
Mobile-enabled access devices are growing in popularity due to the ubiquitous nature of cell phones. Built-in capabilities such as Bluetooth, the newer Bluetooth low energy (LE), Near field communication (NFC) on non-iOS devices and biometric validation such as thumb print readers, as well as QR code reader software designed for mobile devices, offer new, secure ways for mobile phones to connect to access control systems. These control systems provide computer security and can also be used for controlling access to secure buildings.
Secure operating systems
One use of the term "computer security" refers to technology that is used to implement secure operating systems. In the 1980s, the United States Department of Defense (DoD) used the "Orange Book" standards, but the current international standard ISO/IEC 15408, "Common Criteria", defines a number of progressively more stringent Evaluation Assurance Levels. Many common operating systems meet the EAL4 standard of being "Methodically Designed, Tested and Reviewed", but the formal verification required for the highest levels means that they are uncommon. An example of an EAL6 ("Semiformally Verified Design and Tested") system is INTEGRITY-178B, which is used in the Airbus A380 and several military jets.
Secure coding
In software engineering, secure coding aims to guard against the accidental introduction of security vulnerabilities. It is also possible to create software designed from the ground up to be secure. Such systems are "secure by design". Beyond this, formal verification aims to prove the correctness of the algorithms underlying a system; this is important for cryptographic protocols, for example.
Capabilities and access control lists
Within computer systems, two of the main security models capable of enforcing privilege separation are access control lists (ACLs) and role-based access control (RBAC).
An access-control list (ACL), with respect to a computer file system, is a list of permissions associated with an object. An ACL specifies which users or system processes are granted access to objects, as well as what operations are allowed on given objects.
Role-based access control is an approach to restricting system access to authorized users, used by the majority of enterprises with more than 500 employees, and can implement mandatory access control (MAC) or discretionary access control (DAC).
A further approach, capability-based security has been mostly restricted to research operating systems. Capabilities can, however, also be implemented at the language level, leading to a style of programming that is essentially a refinement of standard object-oriented design. An open-source project in the area is the E language.
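As a minimal sketch of the difference between the ACL and RBAC models described above (the users, roles, and objects are hypothetical), the following Python fragment checks a permission first against an ACL attached to an object and then against role assignments:

    # ACL: each object lists which users may perform which operations.
    ACL = {"payroll.xlsx": {"alice": {"read", "write"}, "bob": {"read"}}}

    # RBAC: users are assigned roles, and roles carry permissions.
    USER_ROLES = {"carol": {"auditor"}, "alice": {"hr_manager"}}
    ROLE_PERMS = {"auditor": {("payroll.xlsx", "read")},
                  "hr_manager": {("payroll.xlsx", "read"), ("payroll.xlsx", "write")}}

    def acl_allows(user, obj, op):
        return op in ACL.get(obj, {}).get(user, set())

    def rbac_allows(user, obj, op):
        return any((obj, op) in ROLE_PERMS.get(role, set())
                   for role in USER_ROLES.get(user, set()))

    print(acl_allows("bob", "payroll.xlsx", "write"))    # False: Bob's ACL entry is read-only
    print(rbac_allows("carol", "payroll.xlsx", "read"))  # True: the auditor role grants read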
End user security training
The end-user is widely recognized as the weakest link in the security chain and it is estimated that more than 90% of security incidents and breaches involve some kind of human error. Among the most commonly recorded forms of errors and misjudgment are poor password management, sending emails containing sensitive data and attachments to the wrong recipient, the inability to recognize misleading URLs and to identify fake websites and dangerous email attachments. A common mistake that users make is saving their user id/password in their browsers to make it easier to log in to banking sites. This is a gift to attackers who have obtained access to a machine by some means. The risk may be mitigated by the use of two-factor authentication.
As the human component of cyber risk is particularly relevant in determining the global cyber risk an organization is facing, security awareness training, at all levels, not only provides formal compliance with regulatory and industry mandates but is considered essential in reducing cyber risk and protecting individuals and companies from the great majority of cyber threats.
The focus on the end-user represents a profound cultural change for many security practitioners, who have traditionally approached cybersecurity exclusively from a technical perspective, and moves along the lines suggested by major security centers to develop a culture of cyber awareness within the organization, recognizing that a security-aware user provides an important line of defense against cyber attacks.
Digital hygiene
Related to end-user training, digital hygiene or cyber hygiene is a fundamental principle relating to information security and, as the analogy with personal hygiene shows, is the equivalent of establishing simple routine measures to minimize the risks from cyber threats. The assumption is that good cyber hygiene practices can give networked users another layer of protection, reducing the risk that one vulnerable node will be used to either mount attacks or compromise another node or network, especially from common cyberattacks. Cyber hygiene should also not be mistaken for proactive cyber defence, a military term.
As opposed to a purely technology-based defense against threats, cyber hygiene mostly regards routine measures that are technically simple to implement and mostly dependent on discipline or education. It can be thought of as an abstract list of tips or measures that have been demonstrated as having a positive effect on personal and/or collective digital security. As such, these measures can be performed by laypeople, not just security experts.
Cyber hygiene relates to personal hygiene as computer viruses relate to biological viruses (or pathogens). However, while the term computer virus was coined almost simultaneously with the creation of the first working computer viruses, the term cyber hygiene is a much later invention, perhaps as late as 2000 by Internet pioneer Vint Cerf. It has since been adopted by the Congress and Senate of the United States, the FBI, EU institutions and heads of state.
Response to breaches
Responding to attempted security breaches is often very difficult for a variety of reasons, including:
Identifying attackers is difficult, as they may operate through proxies, temporary anonymous dial-up accounts, wireless connections, and other anonymizing procedures which make back-tracing difficult - and are often located in another jurisdiction. If they successfully breach security, they have also often gained enough administrative access to enable them to delete logs to cover their tracks.
The sheer number of attempted attacks, often by automated vulnerability scanners and computer worms, is so large that organizations cannot spend time pursuing each.
Law enforcement officers often lack the skills, interest or budget to pursue attackers. In addition, the identification of attackers across a network may require logs from various points in the network and in many countries, which may be difficult or time-consuming to obtain.
Where an attack succeeds and a breach occurs, many jurisdictions now have in place mandatory security breach notification laws.
Types of security and privacy
Access control
Anti-keyloggers
Anti-malware
Anti-spyware
Anti-subversion software
Anti-tamper software
Anti-theft
Antivirus software
Cryptographic software
Computer-aided dispatch (CAD)
Firewall
Intrusion detection system (IDS)
Intrusion prevention system (IPS)
Log management software
Parental control
Records management
Sandbox
Security information management
Security information and event management (SIEM)
Software and operating system updating
Vulnerability Management
Incident response planning
Incident response is an organized approach to addressing and managing the aftermath of a computer security incident or compromise with the goal of preventing a breach or thwarting a cyberattack. An incident that is not identified and managed at the time of intrusion typically escalates to a more damaging event such as a data breach or system failure. The intended outcome of a computer security incident response plan is to contain the incident, limit damage and assist recovery to business as usual. Responding to compromises quickly can mitigate exploited vulnerabilities, restore services and processes and minimize losses.
Incident response planning allows an organization to establish a series of best practices to stop an intrusion before it causes damage. Typical incident response plans contain a set of written instructions that outline the organization's response to a cyberattack. Without a documented plan in place, an organization may not successfully detect an intrusion or compromise and stakeholders may not understand their roles, processes and procedures during an escalation, slowing the organization's response and resolution.
There are four key components of a computer security incident response plan:
Preparation: Preparing stakeholders on the procedures for handling computer security incidents or compromises
Detection and analysis: Identifying and investigating suspicious activity to confirm a security incident, prioritizing the response based on impact and coordinating notification of the incident
Containment, eradication and recovery: Isolating affected systems to prevent escalation and limit impact, pinpointing the genesis of the incident, removing malware, affected systems and bad actors from the environment and restoring systems and data when a threat no longer remains
Post incident activity: Post mortem analysis of the incident, its root cause and the organization's response with the intent of improving the incident response plan and future response efforts.
Notable attacks and breaches
Some illustrative examples of different types of computer security breaches are given below.
Robert Morris and the first computer worm
In 1988, 60,000 computers were connected to the Internet, and most were mainframes, minicomputers and professional workstations. On 2 November 1988, many started to slow down, because they were running a malicious code that demanded processor time and that spread itself to other computers – the first internet "computer worm". The software was traced back to 23-year-old Cornell University graduate student Robert Tappan Morris who said "he wanted to count how many machines were connected to the Internet".
Rome Laboratory
In 1994, over a hundred intrusions were made by unidentified crackers into the Rome Laboratory, the US Air Force's main command and research facility. Using trojan horses, hackers were able to obtain unrestricted access to Rome's networking systems and remove traces of their activities. The intruders were able to obtain classified files, such as air tasking order systems data and furthermore able to penetrate connected networks of National Aeronautics and Space Administration's Goddard Space Flight Center, Wright-Patterson Air Force Base, some Defense contractors, and other private sector organizations, by posing as a trusted Rome center user.
TJX customer credit card details
In early 2007, American apparel and home goods company TJX announced that it was the victim of an unauthorized computer systems intrusion and that the hackers had accessed a system that stored data on credit card, debit card, check, and merchandise return transactions.
Stuxnet attack
In 2010, the computer worm known as Stuxnet reportedly ruined almost one-fifth of Iran's nuclear centrifuges. It did so by disrupting industrial programmable logic controllers (PLCs) in a targeted attack. This is generally believed to have been launched by Israel and the United States to disrupt Iran's nuclear program – although neither has publicly admitted this.
Global surveillance disclosures
In early 2013, documents provided by Edward Snowden were published by The Washington Post and The Guardian exposing the massive scale of NSA global surveillance. There were also indications that the NSA may have inserted a backdoor in a NIST standard for encryption. This standard was later withdrawn due to widespread criticism. The NSA additionally were revealed to have tapped the links between Google's data centers.
Target and Home Depot breaches
In 2013 and 2014, a Ukrainian hacker known as Rescator broke into Target Corporation computers in 2013, stealing roughly 40 million credit cards, and then Home Depot computers in 2014, stealing between 53 and 56 million credit card numbers. Warnings were delivered to both corporations but were ignored; physical security breaches using self-checkout machines are believed to have played a large role. "The malware utilized is absolutely unsophisticated and uninteresting," says Jim Walter, director of threat intelligence operations at security technology company McAfee – meaning that the heists could have easily been stopped by existing antivirus software had administrators responded to the warnings. The size of the thefts has resulted in major attention from state and federal United States authorities, and the investigation is ongoing.
Office of Personnel Management data breach
In April 2015, the Office of Personnel Management discovered it had been hacked more than a year earlier in a data breach, resulting in the theft of approximately 21.5 million personnel records handled by the office. The Office of Personnel Management hack has been described by federal officials as among the largest breaches of government data in the history of the United States. Data targeted in the breach included personally identifiable information such as Social Security numbers, names, dates and places of birth, addresses, and fingerprints of current and former government employees as well as anyone who had undergone a government background check. It is believed the hack was perpetrated by Chinese hackers.
Ashley Madison breach
In July 2015, a hacker group known as The Impact Team successfully breached the extramarital relationship website Ashley Madison, created by Avid Life Media. The group claimed that they had taken not only company data but user data as well. After the breach, The Impact Team dumped emails from the company's CEO to prove their point, and threatened to dump customer data unless the website was taken down permanently. When Avid Life Media did not take the site offline, the group released two more compressed files, one 9.7 GB and the second 20 GB. After the second data dump, Avid Life Media CEO Noel Biderman resigned, but the website remained functioning.
Colonial Pipeline Ransomware Attack
In May 2021, a ransomware attack shut down the Colonial Pipeline, the largest fuel pipeline in the U.S., and led to fuel shortages across the East Coast.
Legal issues and global regulation
International legal issues of cyber attacks are complicated in nature. There is no global base of common rules to judge, and eventually punish, cybercrimes and cybercriminals - and where security firms or agencies do locate the cybercriminal behind the creation of a particular piece of malware or form of cyber attack, often the local authorities cannot take action due to lack of laws under which to prosecute. Proving attribution for cybercrimes and cyberattacks is also a major problem for all law enforcement agencies. "Computer viruses switch from one country to another, from one jurisdiction to another – moving around the world, using the fact that we don't have the capability to globally police operations like this. So the Internet is as if someone [had] given free plane tickets to all the online criminals of the world." The use of techniques such as dynamic DNS, fast flux and bullet proof servers add to the difficulty of investigation and enforcement.
Role of government
The role of the government is to make regulations to force companies and organizations to protect their systems, infrastructure and information from any cyberattacks, but also to protect its own national infrastructure such as the national power-grid.
The government's regulatory role in cyberspace is complicated. For some, cyberspace was seen as a virtual space that was to remain free of government intervention, as can be seen in many of today's libertarian blockchain and bitcoin discussions.
Many government officials and experts think that the government should do more and that there is a crucial need for improved regulation, mainly due to the failure of the private sector to solve the cybersecurity problem efficiently. R. Clarke said during a panel discussion at the RSA Security Conference in San Francisco that he believes the "industry only responds when you threaten regulation. If the industry doesn't respond (to the threat), you have to follow through." On the other hand, executives from the private sector agree that improvements are necessary, but think that government intervention would affect their ability to innovate efficiently. Daniel R. McCarthy analyzed this public-private partnership in cybersecurity and reflected on the role of cybersecurity in the broader constitution of political order.
On 22 May 2020, the UN Security Council held its second ever informal meeting on cybersecurity to focus on cyber challenges to international peace. According to UN Secretary-General António Guterres, new technologies are too often used to violate rights.
International actions
Many different teams and organizations exist, including:
The Forum of Incident Response and Security Teams (FIRST) is the global association of CSIRTs. The US-CERT, AT&T, Apple, Cisco, McAfee, Microsoft are all members of this international team.
The Council of Europe helps protect societies worldwide from the threat of cybercrime through the Convention on Cybercrime.
The purpose of the Messaging Anti-Abuse Working Group (MAAWG) is to bring the messaging industry together to work collaboratively and to successfully address the various forms of messaging abuse, such as spam, viruses, denial-of-service attacks and other messaging exploitations. France Telecom, Facebook, AT&T, Apple, Cisco, Sprint are some of the members of the MAAWG.
ENISA: The European Network and Information Security Agency (ENISA) is an agency of the European Union with the objective of improving network and information security in the European Union.
Europe
On 14 April 2016 the European Parliament and Council of the European Union adopted The General Data Protection Regulation (GDPR) (EU) 2016/679. GDPR, which became enforceable beginning 25 May 2018, provides for data protection and privacy for all individuals within the European Union (EU) and the European Economic Area (EEA). GDPR requires that business processes that handle personal data be built with data protection by design and by default. GDPR also requires that certain organizations appoint a Data Protection Officer (DPO).
National actions
Computer emergency response teams
Most countries have their own computer emergency response team to protect network security.
Canada
Since 2010, Canada has had a cybersecurity strategy. This functions as a counterpart document to the National Strategy and Action Plan for Critical Infrastructure. The strategy has three main pillars: securing government systems, securing vital private cyber systems, and helping Canadians to be secure online. There is also a Cyber Incident Management Framework to provide a coordinated response in the event of a cyber incident.
The Canadian Cyber Incident Response Centre (CCIRC) is responsible for mitigating and responding to threats to Canada's critical infrastructure and cyber systems. It provides support to mitigate cyber threats, technical support to respond & recover from targeted cyber attacks, and provides online tools for members of Canada's critical infrastructure sectors. It posts regular cybersecurity bulletins & operates an online reporting tool where individuals and organizations can report a cyber incident.
To inform the general public on how to protect themselves online, Public Safety Canada has partnered with STOP.THINK.CONNECT, a coalition of non-profit, private sector, and government organizations, and launched the Cyber Security Cooperation Program. They also run the GetCyberSafe portal for Canadian citizens, and Cyber Security Awareness Month during October.
Public Safety Canada aims to begin an evaluation of Canada's cybersecurity strategy in early 2015.
China
China's Central Leading Group for Internet Security and Informatization () was established on 27 February 2014. This Leading Small Group (LSG) of the Chinese Communist Party is headed by General Secretary Xi Jinping himself and is staffed with relevant Party and state decision-makers. The LSG was created to overcome the incoherent policies and overlapping responsibilities that characterized China's former cyberspace decision-making mechanisms. The LSG oversees policy-making in the economic, political, cultural, social and military fields as they relate to network security and IT strategy. This LSG also coordinates major policy initiatives in the international arena that promote norms and standards favored by the Chinese government and that emphasize the principle of national sovereignty in cyberspace.
Germany
Berlin starts National Cyber Defense Initiative: On 16 June 2011, the German Minister for Home Affairs officially opened the new German NCAZ (National Center for Cyber Defense), Nationales Cyber-Abwehrzentrum, located in Bonn. The NCAZ closely cooperates with the BSI (Federal Office for Information Security, Bundesamt für Sicherheit in der Informationstechnik), the BKA (Federal Police Organisation, Bundeskriminalamt), the BND (Federal Intelligence Service, Bundesnachrichtendienst), the MAD (Military Intelligence Service, Amt für den Militärischen Abschirmdienst) and other national organizations in Germany taking care of national security aspects. According to the Minister, the primary task of the new organization, founded on 23 February 2011, is to detect and prevent attacks against the national infrastructure, citing incidents like Stuxnet. Germany has also established the largest research institution for IT security in Europe, the Center for Research in Security and Privacy (CRISP) in Darmstadt.
India
Some provisions for cybersecurity have been incorporated into rules framed under the Information Technology Act 2000.
The National Cyber Security Policy 2013 is a policy framework by the Ministry of Electronics and Information Technology (MeitY) which aims to protect the public and private infrastructure from cyberattacks, and safeguard "information, such as personal information (of web users), financial and banking information and sovereign data". CERT-In is the nodal agency which monitors the cyber threats in the country. The post of National Cyber Security Coordinator has also been created in the Prime Minister's Office (PMO).
The Indian Companies Act 2013 has also introduced cyber law and cybersecurity obligations on the part of Indian directors. Some provisions for cybersecurity have been incorporated into rules framed under the Information Technology Act 2000, as updated in 2013.
South Korea
Following cyber attacks in the first half of 2013, when the government, news media, television station, and bank websites were compromised, the national government committed to the training of 5,000 new cybersecurity experts by 2017. The South Korean government blamed its northern counterpart for these attacks, as well as incidents that occurred in 2009, 2011, and 2012, but Pyongyang denies the accusations.
United States
Legislation
The Computer Fraud and Abuse Act, enacted in 1986, is the key legislation. It prohibits unauthorized access to or damage of "protected computers" as defined in 18 U.S.C. § 1030. Although various other measures have been proposed, none has succeeded.
In 2013, executive order 13636 Improving Critical Infrastructure Cybersecurity was signed, which prompted the creation of the NIST Cybersecurity Framework.
In response to the Colonial Pipeline ransomware attack President Joe Biden signed Executive Order 14028 on May 12, 2021, to increase software security standards for sales to the government, tighten detection and security on existing systems, improve information sharing and training, establish a Cyber Safety Review Board, and improve incident response.
Standardized government testing services
The General Services Administration (GSA) has standardized the "penetration test" service as a pre-vetted support service, to rapidly address potential vulnerabilities, and stop adversaries before they impact US federal, state and local governments. These services are commonly referred to as Highly Adaptive Cybersecurity Services (HACS).
Agencies
The Department of Homeland Security has a dedicated division responsible for the response system, risk management program and requirements for cybersecurity in the United States called the National Cyber Security Division. The division is home to US-CERT operations and the National Cyber Alert System. The National Cybersecurity and Communications Integration Center brings together government organizations responsible for protecting computer networks and networked infrastructure.
The third priority of the Federal Bureau of Investigation (FBI) is to: "Protect the United States against cyber-based attacks and high-technology crimes", and they, along with the National White Collar Crime Center (NW3C), and the Bureau of Justice Assistance (BJA) are part of the multi-agency task force, The Internet Crime Complaint Center, also known as IC3.
In addition to its own specific duties, the FBI participates alongside non-profit organizations such as InfraGard.
The Computer Crime and Intellectual Property Section (CCIPS) operates in the United States Department of Justice Criminal Division. The CCIPS is in charge of investigating computer crime and intellectual property crime and is specialized in the search and seizure of digital evidence in computers and networks. In 2017, CCIPS published A Framework for a Vulnerability Disclosure Program for Online Systems to help organizations "clearly describe authorized vulnerability disclosure and discovery conduct, thereby substantially reducing the likelihood that such described activities will result in a civil or criminal violation of law under the Computer Fraud and Abuse Act (18 U.S.C. § 1030)."
The United States Cyber Command, also known as USCYBERCOM, "has the mission to direct, synchronize, and coordinate cyberspace planning and operations to defend and advance national interests in collaboration with domestic and international partners." It has no role in the protection of civilian networks.
The U.S. Federal Communications Commission's role in cybersecurity is to strengthen the protection of critical communications infrastructure, to assist in maintaining the reliability of networks during disasters, to aid in swift recovery after, and to ensure that first responders have access to effective communications services.
The Food and Drug Administration has issued guidance for medical devices, and the National Highway Traffic Safety Administration is concerned with automotive cybersecurity. After being criticized by the Government Accountability Office, and following successful attacks on airports and claimed attacks on airplanes, the Federal Aviation Administration has devoted funding to securing systems on board the planes of private manufacturers, and the Aircraft Communications Addressing and Reporting System. Concerns have also been raised about the future Next Generation Air Transportation System.
Computer emergency readiness team
"Computer emergency response team" is a name given to expert groups that handle computer security incidents. In the US, two distinct organization exist, although they do work closely together.
US-CERT: part of the National Cyber Security Division of the United States Department of Homeland Security.
CERT/CC: created by the Defense Advanced Research Projects Agency (DARPA) and run by the Software Engineering Institute (SEI).
Modern warfare
There is growing concern that cyberspace will become the next theater of warfare. As Mark Clayton from The Christian Science Monitor wrote in a 2015 article titled "The New Cyber Arms Race":
In the future, wars will not just be fought by soldiers with guns or with planes that drop bombs. They will also be fought with the click of a mouse a half a world away that unleashes carefully weaponized computer programs that disrupt or destroy critical industries like utilities, transportation, communications, and energy. Such attacks could also disable military networks that control the movement of troops, the path of jet fighters, the command and control of warships.
This has led to new terms such as cyberwarfare and cyberterrorism. The United States Cyber Command was created in 2009 and many other countries have similar forces.
There are a few critical voices that question whether cybersecurity is as significant a threat as it is made out to be.
Careers
Cybersecurity is a fast-growing field of IT concerned with reducing the risk to organizations of being hacked or suffering a data breach. According to research from the Enterprise Strategy Group, 46% of organizations say that they have a "problematic shortage" of cybersecurity skills in 2016, up from 28% in 2015. Commercial, government and non-governmental organizations all employ cybersecurity professionals. The fastest increases in demand for cybersecurity workers are in industries managing increasing volumes of consumer data such as finance, health care, and retail. However, the use of the term "cybersecurity" is more prevalent in government job descriptions.
Typical cybersecurity job titles and descriptions include:
Security analyst
Analyzes and assesses vulnerabilities in the infrastructure (software, hardware, networks), investigates using available tools and countermeasures to remedy the detected vulnerabilities and recommends solutions and best practices. Analyzes and assesses damage to the data/infrastructure as a result of security incidents, examines available recovery tools and processes, and recommends solutions. Tests for compliance with security policies and procedures. May assist in the creation, implementation, or management of security solutions.
Security engineer
Performs security monitoring, security and data/logs analysis, and forensic analysis, to detect security incidents, and mounts the incident response. Investigates and utilizes new technologies and processes to enhance security capabilities and implement improvements. May also review code or perform other security engineering methodologies.
Security architect
Designs a security system or major components of a security system, and may head a security design team building a new security system.
Security administrator
Installs and manages organization-wide security systems. This position may also include taking on some of the tasks of a security analyst in smaller organizations.
Chief Information Security Officer (CISO)
A high-level management position responsible for the entire information security division/staff. The position may include hands-on technical work.
Chief Security Officer (CSO)
A high-level management position responsible for the entire security division/staff. A newer position now deemed needed as security risks grow.
Data Protection Officer (DPO)
A DPO is tasked with monitoring compliance with the UK GDPR and other data protection laws, the organization's data protection policies, awareness-raising, training, and audits.
Security Consultant/Specialist/Intelligence
Broad titles that encompass any one or all of the other roles or titles tasked with protecting computers, networks, software, data or information systems against viruses, worms, spyware, malware, intrusion detection, unauthorized access, denial-of-service attacks, and an ever-increasing list of attacks by hackers acting as individuals or as part of organized crime or foreign governments.
Student programs are also available for people interested in beginning a career in cybersecurity. Meanwhile, a flexible and effective option for information security professionals of all experience levels to keep studying is online security training, including webcasts. A wide range of certified courses are also available.
In the United Kingdom, a nationwide set of cybersecurity forums, known as the U.K. Cyber Security Forum, was established with the support of the Government's cybersecurity strategy in order to encourage start-ups and innovation and to address the skills gap identified by the U.K. Government.
In Singapore, the Cyber Security Agency has issued a Singapore Operational Technology (OT) Cybersecurity Competency Framework (OTCCF). The framework defines emerging cybersecurity roles in Operational Technology. The OTCCF was endorsed by the Infocomm Media Development Authority (IMDA). It outlines the different OT cybersecurity job positions as well as the technical skills and core competencies necessary. It also depicts the many career paths available, including vertical and lateral advancement opportunities.
Terminology
The following terms used with regards to computer security are explained below:
Access authorization restricts access to a computer to a group of users through the use of authentication systems. These systems can protect either the whole computer, such as through an interactive login screen, or individual services, such as an FTP server. There are many methods for identifying and authenticating users, such as passwords, identification cards, smart cards, and biometric systems.
Anti-virus software consists of computer programs that attempt to identify, thwart, and eliminate computer viruses and other malicious software (malware).
Applications are executable code, so general practice is to disallow users the power to install them; to install only those which are known to be reputable – and to reduce the attack surface by installing as few as possible. They are typically run with least privilege, with a robust process in place to identify, test and install any released security patches or updates for them.
Authentication techniques can be used to ensure that communication end-points are who they say they are.
Automated theorem proving and other verification tools can be used to enable critical algorithms and code used in secure systems to be mathematically proven to meet their specifications.
Backups are one or more copies kept of important computer files. Typically, multiple copies will be kept at different locations so that if a copy is stolen or damaged, other copies will still exist.
Capability and access control list techniques can be used to ensure privilege separation and mandatory access control. Capabilities vs. ACLs discusses their use.
Chain of trust techniques can be used to attempt to ensure that all software loaded has been certified as authentic by the system's designers.
Confidentiality is the nondisclosure of information except to another authorized person.
Cryptographic techniques can be used to defend data in transit between systems, reducing the probability that the data exchange between systems can be intercepted or modified.
Cyberwarfare is an Internet-based conflict that involves politically motivated attacks on information and information systems. Such attacks can, for example, disable official websites and networks, disrupt or disable essential services, steal or alter classified data, and cripple financial systems.
Data integrity is the accuracy and consistency of stored data, indicated by an absence of any alteration in data between two updates of a data record.
Encryption is used to protect the confidentiality of a message. Cryptographically secure ciphers are designed to make any practical attempt of breaking them infeasible. Symmetric-key ciphers are suitable for bulk encryption using shared keys, and public-key encryption using digital certificates can provide a practical solution for the problem of securely communicating when no key is shared in advance.
Endpoint security software aids networks in preventing malware infection and data theft at network entry points made vulnerable by the prevalence of potentially infected devices such as laptops, mobile devices, and USB drives.
Firewalls serve as a gatekeeper system between networks, allowing only traffic that matches defined rules. They often include detailed logging, and may include intrusion detection and intrusion prevention features. They are near-universal between company local area networks and the Internet, but can also be used internally to impose traffic rules between networks if network segmentation is configured.
A hacker is someone who seeks to breach defenses and exploit weaknesses in a computer system or network.
Honey pots are computers that are intentionally left vulnerable to attack by crackers. They can be used to catch crackers and to identify their techniques.
Intrusion-detection systems are devices or software applications that monitor networks or systems for malicious activity or policy violations.
A microkernel is an approach to operating system design which has only the near-minimum amount of code running at the most privileged level – and runs other elements of the operating system such as device drivers, protocol stacks and file systems, in the safer, less privileged user space.
Pinging. The standard "ping" application can be used to test if an IP address is in use. If it is, attackers may then try a port scan to detect which services are exposed.
A port scan is used to probe an IP address for open ports to identify accessible network services and applications; a minimal sketch of the idea appears after this terminology list.
A key logger is spyware which silently captures and stores each keystroke that a user types on the computer's keyboard.
Social engineering is the use of deception to manipulate individuals to breach security.
A logic bomb is a type of malware added to a legitimate program that lies dormant until it is triggered by a specific event.
Zero trust security means that no one is trusted by default from inside or outside the network, and verification is required from everyone trying to gain access to resources on the network.
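The following minimal Python sketch, referenced in the port scan entry above, probes a short list of TCP ports on a host; the loopback address and port list are placeholders, and such probes should only ever be run against systems one is authorized to test:

    import socket

    def scan(host, ports, timeout=0.5):
        open_ports = []
        for port in ports:
            with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
                s.settimeout(timeout)
                # connect_ex returns 0 when the TCP handshake succeeds, i.e. the port is open.
                if s.connect_ex((host, port)) == 0:
                    open_ports.append(port)
        return open_ports

    print(scan("127.0.0.1", [22, 80, 443, 8080]))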
Notable scholars
See also
References
Further reading
Jeremy Bob, Yonah (2021) "Ex-IDF cyber intel. official reveals secrets behind cyber offense". The Jerusalem Post
Branch, J. (2020). "What's in a Name? Metaphors and Cybersecurity." International Organization.
Fuller, Christopher J. "The Roots of the United States’ Cyber (In)Security," Diplomatic History 43:1 (2019): 157–185. online
Montagnani, Maria Lillà and Cavallo, Mirta Antonella (26 July 2018). "Cybersecurity and Liability in a Big Data World". SSRN.
Shariati, M. et al. (2011). "Enterprise Information Security, a Review of Architectures and Frameworks from Interoperability Perspective". Procedia Computer Science 3: 537–543.
External links
Computer security
Cryptography
Cyberwarfare
Data protection
Information governance
Malware
Chaos Computer Club
The Chaos Computer Club (CCC) is Europe's largest association of hackers with registered members. Founded in 1981, the association is incorporated as an eingetragener Verein (registered association) in Germany, with local chapters (called Erfa-Kreise) in various cities in Germany and the surrounding countries, particularly where there are German-speaking communities.
Since 1985, some chapters in Switzerland have organized an independent sister association called the Chaos Computer Club Schweiz (CCC-CH) instead.
The CCC describes itself as "a galactic community of life forms, independent of age, sex, race or societal orientation, which strives across borders for freedom of information…". In general, the CCC advocates more transparency in government, freedom of information, and the human right to communication. Supporting the principles of the hacker ethic, the club also fights for free universal access to computers and technological infrastructure as well as the use of open-source software. The CCC spreads an entrepreneurial vision refusing capitalist control. It has been characterised as "…one of the most influential digital organisations anywhere, the centre of German digital culture, hacker culture, hacktivism, and the intersection of any discussion of democratic and digital rights".
Members of the CCC have demonstrated and publicized a number of important information security problems.
The CCC frequently criticizes new legislation and products with weak information security which endanger citizen rights or the privacy of users.
Notable members of the CCC regularly function as expert witnesses for the German constitutional court, organize lawsuits and campaigns, or otherwise influence the political process.
Activities
Regular events
The CCC hosts the annual Chaos Communication Congress, Europe's biggest hacker gathering.
When the event was held in the Hamburg congress center in 2013, it drew guests.
For the 2016 installment, guests were expected, with additional viewers following the event via live streaming.
Every four years, the Chaos Communication Camp is the outdoor alternative for hackers worldwide.
The CCC also held, from 2009 to 2013, a yearly conference called SIGINT in Cologne which focused on the impact of digitisation on society. The SIGINT conference was discontinued in 2014.
The four-day conference in Karlsruhe, with more than 1,500 participants, is the second-largest annual event.
Another yearly CCC event taking place on the Easter weekend is the Easterhegg, which is more workshop oriented than the other events.
The CCC often uses the c-base station located in Berlin as an event location or as function rooms.
Publications, Outreach
The CCC has published the irregular magazine Datenschleuder (data slingshot) since 1984.
The Berlin chapter produces a monthly radio show called Chaosradio, which picks up various technical and political topics in a two-hour talk radio format. The program is aired on a local radio station and on the internet.
Other programs have emerged in the context of Chaosradio, including radio programs offered by some regional Chaos Groups and the podcast spin-off CRE by Tim Pritlove.
Many of the chapters of CCC participate in the volunteer project Chaos macht Schule which supports teaching in local schools. Its aims are to improve technology and media literacy of pupils, parents, and teachers.
CCC members are present in big tech companies and in administrative instances. Andy Müller-Maguhn, one of the spokespersons of the CCC since 1986, was a member of the executive committee of ICANN (Internet Corporation for Assigned Names and Numbers) between 2000 and 2002.
CryptoParty
The CCC sensitises and introduces people to the questions of data privacy. Some of its local chapters support or organize so-called CryptoParties to introduce people to the basics of practical cryptography and internet anonymity.
History
Founding
The CCC was founded in West Berlin on 12 September 1981 at a table which had previously belonged to the Kommune 1 in the rooms of the newspaper Die Tageszeitung by Wau Holland and others in anticipation of the prominent role that information technology would play in the way people live and communicate.
BTX-Hack
The CCC became world-famous in 1984 when they drew public attention to the security flaws of the German Bildschirmtext computer network by causing it to debit DM in a Hamburg bank in favor of the club. The money was returned the next day in front of the press. Prior to the incident, the system provider had failed to react to proof of the security flaw provided by the CCC, claiming to the public that their system was safe. Bildschirmtext was the biggest commercially available online system targeted at the general public in its region at that time, run and heavily advertised by the German telecommunications agency Deutsche Bundespost which also strove to keep up-to-date alternatives out of the market.
Karl Koch
In 1987, the CCC was peripherally involved in the first cyberespionage case to make international headlines. A group of German hackers led by Karl Koch, who was loosely affiliated with the CCC, was arrested for breaking into US government and corporate computers, and then selling operating-system source code to the Soviet KGB.
This incident was portrayed in the movie 23.
GSM-Hack
In April 1998, the CCC successfully demonstrated the cloning of a GSM customer card, breaking the COMP128 encryption algorithm used at that time by many GSM SIMs.
Project Blinkenlights
In 2001, the CCC celebrated its twentieth birthday with an interactive light installation dubbed Project Blinkenlights that turned the building Haus des Lehrers in Berlin into a giant computer screen. A follow-up installation, Arcade, was created in 2002 by the CCC for the Bibliothèque nationale de France. Later, in October 2008, CCC's Project Blinkenlights went to Toronto, Ontario, Canada, with the project Stereoscope.
Schäuble fingerprints
In March 2008, the CCC acquired and published the fingerprints of German Minister of the Interior Wolfgang Schäuble. The club's magazine Datenschleuder also included the fingerprint on a film that readers could use to fool fingerprint readers. This was done to protest the use of biometric data in German identity devices such as e-passports.
Staatstrojaner affair
The Staatstrojaner (Federal Trojan horse) is a computer surveillance program installed secretly on a suspect's computer, which the German police uses to wiretap Internet telephony. This "source wiretapping" is the only feasible way to wiretap in this case, since Internet telephony programs will usually encrypt the data when it leaves the computer. The Federal Constitutional Court of Germany has ruled that the police may only use such programs for telephony wiretapping, and for no other purpose, and that this restriction should be enforced through technical and legal means.
On 8 October 2011, the CCC published an analysis of the Staatstrojaner software. The software was found to have the ability to remote control the target computer, to capture screenshots, and to fetch and run arbitrary extra code. The CCC says that having this functionality built in is in direct contradiction to the ruling of the constitutional court.
In addition, there were a number of security problems with the implementation. The software was controllable over the Internet, but the commands were sent completely unencrypted, with no checks for authentication or integrity. This leaves any computer under surveillance using this software vulnerable to attack. The captured screenshots and audio files were encrypted, but so incompetently that the encryption was ineffective. All captured data was sent over a proxy server in the United States, which is problematic since the data is then temporarily outside the German jurisdiction.
The CCC's findings were widely reported in the German press. This trojan has also been nicknamed R2-D2 because the string "C3PO-r2d2-POE" was found in its code; another alias for it is 0zapftis ("It's tapped!" in Bavarian, a sardonic reference to Oktoberfest). According to a Sophos analysis, the trojan's behavior matches that described in a confidential memo between the German Landeskriminalamt and a software firm called DigiTask; the memo was leaked on WikiLeaks in 2008. Among other correlations is the dropper's file name, an abbreviation of Skype Capture Unit Installer. The 64-bit Windows version installs a digitally signed driver, but it is signed by the non-existent certificate authority "Goose Cert". DigiTask later admitted selling spy software to governments.
The Federal Ministry of the Interior released a statement in which they denied that R2-D2 has been used by the Federal Criminal Police Office (BKA); this statement however does not eliminate the possibility that it has been used by state-level German police forces. The BKA had previously announced however (in 2007) that they had somewhat similar trojan software that can inspect a computer's hard drive.
Domscheit-Berg affair
Former WikiLeaks spokesman Daniel Domscheit-Berg was expelled from the national CCC (but not the Berlin chapter) in August 2011. This decision was revoked in February 2012.
As a result of his role in the expulsion, board member Andy Müller-Maguhn was not reelected for another term.
Phone authentication systems
The CCC has repeatedly warned phone users of the weakness of biometric identification in the wake of the 2008 Schäuble fingerprints affair. In its "hacker ethics" the CCC includes "protect people's data", but also "Computers can change your life for the better". The club regards privacy as an individual right: the CCC does not discourage people from sharing or storing personal information on their phones, but advocates better privacy protection, and the use of specific browsing and sharing techniques by users.
Apple TouchID
From a photograph of the user's fingerprint on a glass surface, using "easy everyday means", the biometrics hacking team of the CCC was able to unlock an iPhone 5S.
Samsung S8 iris recognition
The Samsung Galaxy S8's iris recognition system claims to be "one of the safest ways to keep your phone locked and the contents private" as "patterns in your irises are unique to you and are virtually impossible to replicate", as quoted in official Samsung content. However, in some cases, using a high resolution photograph of the phone owner's iris and a lens, the CCC claimed to be able to trick the authentication system.
Fake Chaos Computer Club France
The Chaos Computer Club France (CCCF) was a fake hacker organisation created in 1989 in Lyon (France) by Jean-Bernard Condat, under the command of Jean-Luc Delacour, an agent of the Direction de la surveillance du territoire governmental agency. The primary goal of the CCCF was to watch and gather information about the French hacker community, identifying the hackers who could harm the country. A journalist reported that this organization also worked with the French National Gendarmerie.
The CCCF had an electronic magazine called Chaos Digest (ChaosD). Between 4 January 1993 and 5 August 1993, seventy-three issues were published.
See also
23 (film)
c-base
Chaos Communication Congress
Chaosdorf, the local chapter of the Chaos Computer Club at Düsseldorf
Datenschleuder
Digitalcourage
Digital identity
Hacker culture
Information privacy
Netzpolitik.org
Project Blinkenlights
Security hacker
Tron (hacker)
Wau Holland Foundation
References
Further reading
Chaos Computer Club hackers 'have a conscience', BBC News, 2011-02-11
External links
CCC Events Blog
Chaosradio Podcast Network
Computer clubs in Germany
Hacker groups
Organisations based in Hamburg
Diffie–Hellman key exchange
Diffie–Hellman key exchange is a method of securely exchanging cryptographic keys over a public channel and was one of the first public-key protocols as conceived by Ralph Merkle and named after Whitfield Diffie and Martin Hellman. DH is one of the earliest practical examples of public key exchange implemented within the field of cryptography. Published in 1976 by Diffie and Hellman, this is the earliest publicly known work that proposed the idea of a private key and a corresponding public key.
Traditionally, secure encrypted communication between two parties required that they first exchange keys by some secure physical means, such as paper key lists transported by a trusted courier. The Diffie–Hellman key exchange method allows two parties that have no prior knowledge of each other to jointly establish a shared secret key over an insecure channel. This key can then be used to encrypt subsequent communications using a symmetric-key cipher.
Diffie–Hellman is used to secure a variety of Internet services. However, research published in October 2015 suggests that the parameters in use for many DH Internet applications at that time are not strong enough to prevent compromise by very well-funded attackers, such as the security services of some countries.
The scheme was published by Whitfield Diffie and Martin Hellman in 1976, but in 1997 it was revealed that James H. Ellis, Clifford Cocks, and Malcolm J. Williamson of GCHQ, the British signals intelligence agency, had previously shown in 1969 how public-key cryptography could be achieved.
Although Diffie–Hellman key agreement itself is a non-authenticated key-agreement protocol, it provides the basis for a variety of authenticated protocols, and is used to provide forward secrecy in Transport Layer Security's ephemeral modes (referred to as EDH or DHE depending on the cipher suite).
The method was followed shortly afterwards by RSA, an implementation of public-key cryptography using asymmetric algorithms.
An expired patent from 1977 describes the now public-domain algorithm. It credits Hellman, Diffie, and Merkle as inventors.
Name
In 2002, Hellman suggested the algorithm be called Diffie–Hellman–Merkle key exchange in recognition of Ralph Merkle's contribution to the invention of public-key cryptography (Hellman, 2002).
Description
General overview
Diffie–Hellman key exchange establishes a shared secret between two parties that can be used for secret communication for exchanging data over a public network. An analogy illustrates the concept of public key exchange by using colors instead of very large numbers:
The process begins by having the two parties, Alice and Bob, publicly agree on an arbitrary starting color that does not need to be kept secret (but should be different every time). In this example, the color is yellow. Each person also selects a secret color that they keep to themselves – in this case, red and blue-green. The crucial part of the process is that Alice and Bob each mix their own secret color together with their mutually shared color, resulting in orange-tan and light-blue mixtures respectively, and then publicly exchange the two mixed colors. Finally, each of them mixes the color they received from the partner with their own private color. The result is a final color mixture (yellow-brown in this case) that is identical to their partner's final color mixture.
If a third party listened to the exchange, it would only know the common color (yellow) and the first mixed colors (orange-tan and light-blue), but it would be difficult for this party to determine the final secret color (yellow-brown). Bringing the analogy back to a real-life exchange using large numbers rather than colors, this determination is computationally expensive. It is impossible to compute in a practical amount of time even for modern supercomputers.
Cryptographic explanation
The simplest and the original implementation of the protocol uses the multiplicative group of integers modulo p, where p is prime, and g is a primitive root modulo p. These two values are chosen in this way to ensure that the resulting shared secret can take on any value from 1 to p–1. Here is an example of the protocol, with non-secret values in blue, and secret values in red.
Alice and Bob publicly agree to use a modulus p = 23 and base g = 5 (which is a primitive root modulo 23).
Alice chooses a secret integer a = 4, then sends Bob A = g^a mod p
A = 5^4 mod 23 = 4
Bob chooses a secret integer b = 3, then sends Alice B = g^b mod p
B = 5^3 mod 23 = 10
Alice computes s = B^a mod p
s = 10^4 mod 23 = 18
Bob computes s = A^b mod p
s = 4^3 mod 23 = 18
Alice and Bob now share a secret (the number 18).
Both Alice and Bob have arrived at the same value because, under mod p,
B^a mod p = g^(ba) mod p = g^(ab) mod p = A^b mod p
More specifically,
(g^b mod p)^a mod p = (g^a mod p)^b mod p
Only a and b are kept secret. All the other values – p, g, g^a mod p, and g^b mod p – are sent in the clear. The strength of the scheme comes from the fact that computing g^(ab) mod p = g^(ba) mod p from the knowledge of p, g, g^a mod p, and g^b mod p alone takes an extremely long time by any known algorithm. Once Alice and Bob compute the shared secret they can use it as an encryption key, known only to them, for sending messages across the same open communications channel.
Of course, much larger values of a, b, and p would be needed to make this example secure, since there are only 23 possible results of n mod 23. However, if p is a prime of at least 600 digits, then even the fastest modern computers using the fastest known algorithm cannot find a given only g, p and g^a mod p. Such a problem is called the discrete logarithm problem. The computation of g^a mod p is known as modular exponentiation and can be done efficiently even for large numbers.
Note that g need not be large at all, and in practice is usually a small integer (like 2, 3, ...).
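A minimal Python sketch of the exchange above, using the same toy values (the variable names are illustrative; real deployments use primes of 2048 bits or more):

```python
# Toy Diffie-Hellman walk-through with the article's small example values.
# Real deployments use much larger primes; this is only a sketch.

p, g = 23, 5            # public modulus and base (g is a primitive root mod 23)

a = 4                   # Alice's secret exponent
b = 3                   # Bob's secret exponent

A = pow(g, a, p)        # Alice sends A = g^a mod p  -> 4
B = pow(g, b, p)        # Bob sends   B = g^b mod p  -> 10

s_alice = pow(B, a, p)  # Alice computes B^a mod p
s_bob   = pow(A, b, p)  # Bob computes   A^b mod p

assert s_alice == s_bob == 18   # shared secret
```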
Secrecy chart
The chart below depicts who knows what, again with non-secret values in blue, and secret values in red. Here Eve is an eavesdropper – she watches what is sent between Alice and Bob, but she does not alter the contents of their communications.
g = public (prime) base, known to Alice, Bob, and Eve. g = 5
p = public (prime) modulus, known to Alice, Bob, and Eve. p = 23
a = Alice's private key, known only to Alice. a = 6
b = Bob's private key known only to Bob. b = 15
A = Alice's public key, known to Alice, Bob, and Eve. A = g^a mod p = 8
B = Bob's public key, known to Alice, Bob, and Eve. B = g^b mod p = 19
Now s is the shared secret key, known to both Alice and Bob but not to Eve: s = B^a mod p = A^b mod p = 2. Note that it is not helpful for Eve to compute AB, which equals g^(a + b) mod p.
Note: It should be difficult for Alice to solve for Bob's private key or for Bob to solve for Alice's private key. If it is not difficult for Alice to solve for Bob's private key (or vice versa), Eve may simply substitute her own private/public key pair, plug Bob's public key into her private key, produce a fake shared secret key, and solve for Bob's private key (and use that to solve for the shared secret key). Eve may also attempt to choose a public/private key pair that makes it easy for her to solve for Bob's private key.
Another demonstration of Diffie–Hellman (also using numbers too small for practical use) is given here.
Generalization to finite cyclic groups
Here is a more general description of the protocol:
Alice and Bob agree on a finite cyclic group G of order n and a generating element g in G. (This is usually done long before the rest of the protocol; g is assumed to be known by all attackers.) The group G is written multiplicatively.
Alice picks a random natural number a with 1 < a < n, and sends the element g^a of G to Bob.
Bob picks a random natural number b with 1 < b < n, and sends the element g^b of G to Alice.
Alice computes the element (g^b)^a = g^(ba) of G.
Bob computes the element (g^a)^b = g^(ab) of G.
Both Alice and Bob are now in possession of the group element g^(ab) = g^(ba), which can serve as the shared secret key. The group G satisfies the requisite condition for secure communication if there is not an efficient algorithm for determining g^(ab) given g, g^a, and g^b.
For example, the elliptic curve Diffie–Hellman protocol is a variant that represents an element of G as a point on an elliptic curve instead of as an integer modulo n. Variants using hyperelliptic curves have also been proposed. The supersingular isogeny key exchange is a Diffie–Hellman variant that has been designed to be secure against quantum computers.
Ephemeral and/or Static Keys
The keys used can be either ephemeral or static (long-term), or mixed, in so-called semi-static DH. These variants have different properties and hence different use cases. An overview of many variants, with some discussion, can be found in NIST SP 800-56A. A basic list:
ephemeral, ephemeral: Usually used for key agreement. Provides forward secrecy, but no authenticity.
static, static: Generates a long-term shared secret. Does not provide forward secrecy, but does provide implicit authenticity. Since the keys are static, it does not, for example, protect against replay attacks.
ephemeral, static: Used, for example, in ElGamal encryption or the Integrated Encryption Scheme (IES). If used in key agreement, it can provide implicit one-sided authenticity (the ephemeral side can verify the authenticity of the static side). No forward secrecy is provided.
It is possible to use ephemeral and static keys in one key agreement to provide more security, as for example shown in NIST SP 800-56A, but it is also possible to combine them in a single DH key exchange, which is then called triple DH (3-DH).
Triple Diffie-Hellman (3-DH)
In 1997, a kind of double DH was proposed by Simon Blake-Wilson, Don Johnson, and Alfred Menezes in "Key Agreement Protocols and their Security Analysis" (1997), and was later improved by C. Kudla and K. G. Paterson in "Modular Security Proofs for Key Agreement Protocols" (2005) and shown to be secure. It is also used or mentioned in other variants, for example:
Extended Triple Diffie-Hellman
sci.crypt newsgroup (from 18 August 2002)
Double Ratchet Algorithm
Signal Protocol
The long-term secret keys of Alice and Bob are denoted by a and b respectively, with public keys A and B, and the ephemeral key pairs are (x, X) and (y, Y). The protocol then combines three Diffie–Hellman values: one between each party's long-term key and the other party's ephemeral key, and one between the two ephemeral keys.
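A minimal Python sketch of one common way to combine the three values (the toy parameters, the ordering of the inputs, and the use of SHA-256 as a stand-in key-derivation function are illustrative assumptions; real designs such as X3DH define their own encoding and KDF):

```python
import hashlib

p, g = 23, 5                       # toy parameters; real ones are far larger

a, A = 6,  pow(g, 6, p)            # Alice's long-term key pair (a, A = g^a)
b, B = 15, pow(g, 15, p)           # Bob's long-term key pair   (b, B = g^b)
x, X = 11, pow(g, 11, p)           # Alice's ephemeral key pair
y, Y = 13, pow(g, 13, p)           # Bob's ephemeral key pair

def kdf(*values):
    # Illustrative stand-in for a real key-derivation function.
    return hashlib.sha256("|".join(str(v) for v in values).encode()).hexdigest()

# Alice combines B^x (Bob's long-term with her ephemeral), Y^a (her long-term
# with Bob's ephemeral) and Y^x (both ephemerals).
k_alice = kdf(pow(B, x, p), pow(Y, a, p), pow(Y, x, p))

# Bob derives the same three group elements from his side.
k_bob = kdf(pow(X, b, p), pow(A, y, p), pow(X, y, p))

assert k_alice == k_bob
```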
The long-term public keys need to be transferred somehow. That can be done beforehand in a separate, trusted channel, or the public keys can be encrypted using some partial key agreement to preserve anonymity. For more such details, as well as other improvements like side-channel protection, explicit key confirmation, early messages, and additional password authentication, see for example "Advanced modular handshake for key agreement and optional authentication".
Operation with more than two parties
Diffie–Hellman key agreement is not limited to negotiating a key shared by only two participants. Any number of users can take part in an agreement by performing iterations of the agreement protocol and exchanging intermediate data (which does not itself need to be kept secret). For example, Alice, Bob, and Carol could participate in a Diffie–Hellman agreement as follows, with all operations taken to be modulo p:
The parties agree on the algorithm parameters p and g.
The parties generate their private keys, named a, b, and c.
Alice computes g^a mod p and sends it to Bob.
Bob computes (g^a)^b mod p = g^(ab) mod p and sends it to Carol.
Carol computes (g^(ab))^c mod p = g^(abc) mod p and uses it as her secret.
Bob computes g^b mod p and sends it to Carol.
Carol computes (g^b)^c mod p = g^(bc) mod p and sends it to Alice.
Alice computes (g^(bc))^a mod p = g^(bca) mod p = g^(abc) mod p and uses it as her secret.
Carol computes g^c mod p and sends it to Alice.
Alice computes (g^c)^a mod p = g^(ca) mod p and sends it to Bob.
Bob computes (g^(ca))^b mod p = g^(cab) mod p = g^(abc) mod p and uses it as his secret.
An eavesdropper has been able to see g^a, g^b, g^c, g^(ab), g^(bc), and g^(ca), but cannot use any combination of these to efficiently reproduce g^(abc).
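The three secrets kept by the parties can be checked with a short Python sketch, again using toy parameters (the private exponents are arbitrary illustrative values):

```python
# Three-party Diffie-Hellman (Alice, Bob, Carol) following the steps above,
# with toy parameters; all arithmetic is modulo p.
p, g = 23, 5
a, b, c = 6, 15, 13          # private keys (toy values)

# First pass:  g^a -> Bob -> g^ab -> Carol, who keeps g^abc
s_carol = pow(pow(pow(g, a, p), b, p), c, p)
# Second pass: g^b -> Carol -> g^bc -> Alice, who keeps g^bca
s_alice = pow(pow(pow(g, b, p), c, p), a, p)
# Third pass:  g^c -> Alice -> g^ca -> Bob, who keeps g^cab
s_bob = pow(pow(pow(g, c, p), a, p), b, p)

assert s_alice == s_bob == s_carol   # all equal g^(abc) mod p
```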
To extend this mechanism to larger groups, two basic principles must be followed:
Starting with an "empty" key consisting only of g, the secret is made by raising the current value to every participant's private exponent once, in any order (the first such exponentiation yields the participant's own public key).
Any intermediate value (having up to N-1 exponents applied, where N is the number of participants in the group) may be revealed publicly, but the final value (having had all N exponents applied) constitutes the shared secret and hence must never be revealed publicly. Thus, each user must obtain their copy of the secret by applying their own private key last (otherwise there would be no way for the last contributor to communicate the final key to its recipient, as that last contributor would have turned the key into the very secret the group wished to protect).
These principles leave open various options for choosing in which order participants contribute to keys. The simplest and most obvious solution is to arrange the N participants in a circle and have N keys rotate around the circle, until eventually every key has been contributed to by all N participants (ending with its owner) and each participant has contributed to N keys (ending with their own). However, this requires that every participant perform N modular exponentiations.
By choosing a more optimal order, and relying on the fact that keys can be duplicated, it is possible to reduce the number of modular exponentiations performed by each participant to log2(N) + 1 using a divide-and-conquer-style approach, given here for eight participants:
Participants A, B, C, and D each perform one exponentiation, yielding g^(abcd); this value is sent to E, F, G, and H. In return, participants A, B, C, and D receive g^(efgh).
Participants A and B each perform one exponentiation, yielding g^(efghab), which they send to C and D, while C and D do the same, yielding g^(efghcd), which they send to A and B.
Participant A performs an exponentiation, yielding g^(efghcda), which it sends to B; similarly, B sends g^(efghcdb) to A. C and D do similarly.
Participant A performs one final exponentiation, yielding the secret g^(efghcdba) = g^(abcdefgh), while B does the same to get g^(efghcdab) = g^(abcdefgh); again, C and D do similarly.
Participants E through H simultaneously perform the same operations using g^(abcd) as their starting point.
Once this operation has been completed all participants will possess the secret g^(abcdefgh), but each participant will have performed only four modular exponentiations, rather than the eight implied by a simple circular arrangement.
Security
The protocol is considered secure against eavesdroppers if G and g are chosen properly. In particular, the order of the group G must be large, particularly if the same group is used for large amounts of traffic. The eavesdropper has to solve the Diffie–Hellman problem to obtain gab. This is currently considered difficult for groups whose order is large enough. An efficient algorithm to solve the discrete logarithm problem would make it easy to compute a or b and solve the Diffie–Hellman problem, making this and many other public key cryptosystems insecure. Fields of small characteristic may be less secure.
The order of G should have a large prime factor to prevent use of the Pohlig–Hellman algorithm to obtain a or b. For this reason, a Sophie Germain prime q is sometimes used to calculate p = 2q + 1, called a safe prime, since the order of G is then only divisible by 2 and q. g is then sometimes chosen to generate the order-q subgroup of G, rather than G itself, so that the Legendre symbol of g^a never reveals the low-order bit of a. A protocol using such a choice is for example IKEv2.
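A small Python sketch of one standard way to pick such a generator: given a safe prime p = 2q + 1, the square of any element other than ±1 generates the order-q subgroup (this construction is a common choice, not something mandated by any particular protocol):

```python
import secrets

def subgroup_generator(p):
    # p is assumed to be a safe prime, i.e. p = 2q + 1 with q prime.
    q = (p - 1) // 2
    while True:
        h = secrets.randbelow(p - 3) + 2     # random h in [2, p-2]
        g = pow(h, 2, p)                     # squares lie in the order-q subgroup
        if g != 1:
            return g

p = 23                                       # toy safe prime: 23 = 2*11 + 1
g = subgroup_generator(p)
assert pow(g, (p - 1) // 2, p) == 1          # g^q = 1 and g != 1, so g has order q
```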
g is often a small integer such as 2. Because of the random self-reducibility of the discrete logarithm problem, a small g is just as secure as any other generator of the same group.
If Alice and Bob use random number generators whose outputs are not completely random and can be predicted to some extent, then it is much easier to eavesdrop.
In the original description, the Diffie–Hellman exchange by itself does not provide authentication of the communicating parties and is thus vulnerable to a man-in-the-middle attack. Mallory (an active attacker executing the man-in-the-middle attack) may establish two distinct key exchanges, one with Alice and the other with Bob, effectively masquerading as Alice to Bob, and vice versa, allowing her to decrypt, then re-encrypt, the messages passed between them. Note that Mallory must continue to be in the middle, actively decrypting and re-encrypting messages every time Alice and Bob communicate. If she is ever absent, her previous presence is then revealed to Alice and Bob. They will know that all of their private conversations had been intercepted and decoded by someone in the channel. In most cases it will not help them get Mallory's private key, even if she used the same key for both exchanges.
A method to authenticate the communicating parties to each other is generally needed to prevent this type of attack. Variants of Diffie–Hellman, such as STS protocol, may be used instead to avoid these types of attacks.
Practical attacks on Internet traffic
The number field sieve algorithm, which is generally the most effective in solving the discrete logarithm problem, consists of four computational steps. The first three steps only depend on the order of the group G, not on the specific number whose discrete log is desired. It turns out that much Internet traffic uses one of a handful of groups that are of order 1024 bits or less. By precomputing the first three steps of the number field sieve for the most common groups, an attacker need only carry out the last step, which is much less computationally expensive than the first three steps, to obtain a specific logarithm. The Logjam attack used this vulnerability to compromise a variety of Internet services that allowed the use of groups whose order was a 512-bit prime number, so-called export grade. The authors needed several thousand CPU cores for a week to precompute data for a single 512-bit prime. Once that was done, individual logarithms could be solved in about a minute using two 18-core Intel Xeon CPUs.
As estimated by the authors behind the Logjam attack, the much more difficult precomputation needed to solve the discrete log problem for a 1024-bit prime would cost on the order of $100 million, well within the budget of a large national intelligence agency such as the U.S. National Security Agency (NSA). The Logjam authors speculate that precomputation against widely reused 1024-bit DH primes is behind claims in leaked NSA documents that NSA is able to break much of current cryptography.
To avoid these vulnerabilities, the Logjam authors recommend use of elliptic curve cryptography, for which no similar attack is known. Failing that, they recommend that the order, p, of the Diffie–Hellman group should be at least 2048 bits. They estimate that the pre-computation required for a 2048-bit prime is 10^9 times more difficult than for 1024-bit primes.
Other uses
Encryption
Public key encryption schemes based on the Diffie–Hellman key exchange have been proposed. The first such scheme is the ElGamal encryption. A more modern variant is the Integrated Encryption Scheme.
Forward secrecy
Protocols that achieve forward secrecy generate new key pairs for each session and discard them at the end of the session. The Diffie–Hellman key exchange is a frequent choice for such protocols, because of its fast key generation.
Password-authenticated key agreement
When Alice and Bob share a password, they may use a password-authenticated key agreement (PK) form of Diffie–Hellman to prevent man-in-the-middle attacks. One simple scheme is to compare the hash of s concatenated with the password calculated independently on both ends of channel. A feature of these schemes is that an attacker can only test one specific password on each iteration with the other party, and so the system provides good security with relatively weak passwords. This approach is described in ITU-T Recommendation X.1035, which is used by the G.hn home networking standard.
An example of such a protocol is the Secure Remote Password protocol.
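A toy Python illustration of the comparison described above (this is not a complete password-authenticated key agreement; the hash choice and encoding are assumptions for illustration only):

```python
# Toy illustration: both ends hash the shared DH value s together with the
# password and compare the results in constant time.
import hashlib, hmac

def confirmation_tag(s, password):
    data = str(s).encode() + password.encode()
    return hashlib.sha256(data).hexdigest()

s = 18                        # shared secret from the earlier toy exchange
tag_alice = confirmation_tag(s, "correct horse battery staple")
tag_bob   = confirmation_tag(s, "correct horse battery staple")
assert hmac.compare_digest(tag_alice, tag_bob)   # constant-time comparison
```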
Public key
It is also possible to use Diffie–Hellman as part of a public key infrastructure, allowing Bob to encrypt a message so that only Alice will be able to decrypt it, with no prior communication between them other than Bob having trusted knowledge of Alice's public key. Alice's public key is (g^a mod p, g, p). To send her a message, Bob chooses a random b and then sends Alice g^b mod p (unencrypted) together with the message encrypted with the symmetric key (g^a)^b mod p. Only Alice can determine the symmetric key and hence decrypt the message, because only she has a (the private key). A pre-shared public key also prevents man-in-the-middle attacks.
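A Python sketch of this hybrid usage with toy numbers (deriving the symmetric key by hashing g^(ab) mod p is an illustrative choice; a real system would use an agreed key-derivation function and cipher):

```python
import hashlib

p, g = 23, 5                     # toy parameters; real ones are far larger
a = 6                            # Alice's long-term private key
A = pow(g, a, p)                 # Alice's published public key

b = 15                           # Bob's fresh random value for this message
B = pow(g, b, p)                 # sent to Alice alongside the encrypted message

key_bob   = hashlib.sha256(str(pow(A, b, p)).encode()).digest()
key_alice = hashlib.sha256(str(pow(B, a, p)).encode()).digest()
assert key_bob == key_alice      # both sides now share a symmetric key
```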
In practice, Diffie–Hellman is not used in this way, with RSA being the dominant public key algorithm. This is largely for historical and commercial reasons, namely that RSA Security created a certificate authority for key signing that became Verisign. Diffie–Hellman, as elaborated above, cannot directly be used to sign certificates. However, the ElGamal and DSA signature algorithms are mathematically related to it, as well as MQV, STS and the IKE component of the IPsec protocol suite for securing Internet Protocol communications.
See also
Elliptic-curve Diffie–Hellman key exchange
Supersingular isogeny key exchange
Forward secrecy
Notes
References
General references
The History of Non-Secret Encryption JH Ellis 1987 (28K PDF file) (HTML version)
The First Ten Years of Public-Key Cryptography Whitfield Diffie, Proceedings of the IEEE, vol. 76, no. 5, May 1988, pp: 560–577 (1.9MB PDF file)
Menezes, Alfred; van Oorschot, Paul; Vanstone, Scott (1997). Handbook of Applied Cryptography Boca Raton, Florida: CRC Press. . (Available online)
Singh, Simon (1999) The Code Book: the evolution of secrecy from Mary Queen of Scots to quantum cryptography New York: Doubleday
An Overview of Public Key Cryptography Martin E. Hellman, IEEE Communications Magazine, May 2002, pp. 42–49. (123kB PDF file)
External links
Oral history interview with Martin Hellman, Charles Babbage Institute, University of Minnesota. Leading cryptography scholar Martin Hellman discusses the circumstances and fundamental insights of his invention of public key cryptography with collaborators Whitfield Diffie and Ralph Merkle at Stanford University in the mid-1970s.
– Diffie–Hellman Key Agreement Method. E. Rescorla. June 1999.
– More Modular Exponential (MODP) Diffie–Hellman groups for Internet Key Exchange (IKE). T. Kivinen, M. Kojo, SSH Communications Security. May 2003.
Summary of ANSI X9.42: Agreement of Symmetric Keys Using Discrete Logarithm Cryptography (64K PDF file) (Description of ANSI 9 Standards)
Talk by Martin Hellman in 2007, YouTube video
Crypto dream team Diffie & Hellman wins $1M 2015 Turing Award (a.k.a. "Nobel Prize of Computing")
A Diffie–Hellman demo written in Python3 This demo properly supports very-large key data and enforces the use of prime numbers where required.
Key-agreement protocols
Public-key cryptography |
7978 | https://en.wikipedia.org/wiki/Data%20Encryption%20Standard | Data Encryption Standard | The Data Encryption Standard (DES) is a symmetric-key algorithm for the encryption of digital data. Although its short key length of 56 bits makes it too insecure for modern applications, it has been highly influential in the advancement of cryptography.
Developed in the early 1970s at IBM and based on an earlier design by Horst Feistel, the algorithm was submitted to the National Bureau of Standards (NBS) following the agency's invitation to propose a candidate for the protection of sensitive, unclassified electronic government data. In 1976, after consultation with the National Security Agency (NSA), the NBS selected a slightly modified version (strengthened against differential cryptanalysis, but weakened against brute-force attacks), which was published as an official Federal Information Processing Standard (FIPS) for the United States in 1977.
The publication of an NSA-approved encryption standard led to its quick international adoption and widespread academic scrutiny. Controversies arose from classified design elements, a relatively short key length of the symmetric-key block cipher design, and the involvement of the NSA, raising suspicions about a backdoor. The S-boxes that had prompted those suspicions were designed by the NSA to remove a backdoor they secretly knew (differential cryptanalysis). However, the NSA also ensured that the key size was drastically reduced so that they could break the cipher by brute force attack. The intense academic scrutiny the algorithm received over time led to the modern understanding of block ciphers and their cryptanalysis.
DES is insecure due to the relatively short 56-bit key size. In January 1999, distributed.net and the Electronic Frontier Foundation collaborated to publicly break a DES key in 22 hours and 15 minutes (see chronology). There are also some analytical results which demonstrate theoretical weaknesses in the cipher, although they are infeasible in practice. The algorithm is believed to be practically secure in the form of Triple DES, although there are theoretical attacks. This cipher has been superseded by the Advanced Encryption Standard (AES). DES has been withdrawn as a standard by the National Institute of Standards and Technology.
Some documents distinguish between the DES standard and its algorithm, referring to the algorithm as the DEA (Data Encryption Algorithm).
History
The origins of DES date to 1972, when a National Bureau of Standards study of US government computer security identified a need for a government-wide standard for encrypting unclassified, sensitive information.
Around the same time, engineer Mohamed Atalla in 1972 founded Atalla Corporation and developed the first hardware security module (HSM), the so-called "Atalla Box" which was commercialized in 1973. It protected offline devices with a secure PIN generating key, and was a commercial success. Banks and credit card companies were fearful that Atalla would dominate the market, which spurred the development of an international encryption standard. Atalla was an early competitor to IBM in the banking market, and was cited as an influence by IBM employees who worked on the DES standard. The IBM 3624 later adopted a similar PIN verification system to the earlier Atalla system.
On 15 May 1973, after consulting with the NSA, NBS solicited proposals for a cipher that would meet rigorous design criteria. None of the submissions was suitable. A second request was issued on 27 August 1974. This time, IBM submitted a candidate which was deemed acceptable—a cipher developed during the period 1973–1974 based on an earlier algorithm, Horst Feistel's Lucifer cipher. The team at IBM involved in cipher design and analysis included Feistel, Walter Tuchman, Don Coppersmith, Alan Konheim, Carl Meyer, Mike Matyas, Roy Adler, Edna Grossman, Bill Notz, Lynn Smith, and Bryant Tuckerman.
NSA's involvement in the design
On 17 March 1975, the proposed DES was published in the Federal Register. Public comments were requested, and in the following year two open workshops were held to discuss the proposed standard. There was criticism received from public-key cryptography pioneers Martin Hellman and Whitfield Diffie, citing a shortened key length and the mysterious "S-boxes" as evidence of improper interference from the NSA. The suspicion was that the algorithm had been covertly weakened by the intelligence agency so that they—but no one else—could easily read encrypted messages. Alan Konheim (one of the designers of DES) commented, "We sent the S-boxes off to Washington. They came back and were all different." The United States Senate Select Committee on Intelligence reviewed the NSA's actions to determine whether there had been any improper involvement. In the unclassified summary of their findings, published in 1978, the Committee wrote:
However, it also found that
Another member of the DES team, Walter Tuchman, stated "We developed the DES algorithm entirely within IBM using IBMers. The NSA did not dictate a single wire!"
In contrast, a declassified NSA book on cryptologic history states:
and
Some of the suspicions about hidden weaknesses in the S-boxes were allayed in 1990, with the independent discovery and open publication by Eli Biham and Adi Shamir of differential cryptanalysis, a general method for breaking block ciphers. The S-boxes of DES were much more resistant to the attack than if they had been chosen at random, strongly suggesting that IBM knew about the technique in the 1970s. This was indeed the case; in 1994, Don Coppersmith published some of the original design criteria for the S-boxes. According to Steven Levy, IBM Watson researchers discovered differential cryptanalytic attacks in 1974 and were asked by the NSA to keep the technique secret. Coppersmith explains IBM's secrecy decision by saying, "that was because [differential cryptanalysis] can be a very powerful tool, used against many schemes, and there was concern that such information in the public domain could adversely affect national security." Levy quotes Walter Tuchman: "[t]hey asked us to stamp all our documents confidential... We actually put a number on each one and locked them up in safes, because they were considered U.S. government classified. They said do it. So I did it". Bruce Schneier observed that "It took the academic community two decades to figure out that the NSA 'tweaks' actually improved the security of DES."
The algorithm as a standard
Despite the criticisms, DES was approved as a federal standard in November 1976, and published on 15 January 1977 as FIPS PUB 46, authorized for use on all unclassified data. It was subsequently reaffirmed as the standard in 1983, 1988 (revised as FIPS-46-1), 1993 (FIPS-46-2), and again in 1999 (FIPS-46-3), the latter prescribing "Triple DES" (see below). On 26 May 2002, DES was finally superseded by the Advanced Encryption Standard (AES), following a public competition. On 19 May 2005, FIPS 46-3 was officially withdrawn, but NIST has approved Triple DES through the year 2030 for sensitive government information.
The algorithm is also specified in ANSI X3.92 (Today X3 is known as INCITS and ANSI X3.92 as ANSI INCITS 92), NIST SP 800-67 and ISO/IEC 18033-3 (as a component of TDEA).
Another theoretical attack, linear cryptanalysis, was published in 1994, but it was the Electronic Frontier Foundation's DES cracker in 1998 that demonstrated that DES could be attacked very practically, and highlighted the need for a replacement algorithm. These and other methods of cryptanalysis are discussed in more detail later in this article.
The introduction of DES is considered to have been a catalyst for the academic study of cryptography, particularly of methods to crack block ciphers. According to a NIST retrospective about DES,
The DES can be said to have "jump-started" the nonmilitary study and development of encryption algorithms. In the 1970s there were very few cryptographers, except for those in military or intelligence organizations, and little academic study of cryptography. There are now many active academic cryptologists, mathematics departments with strong programs in cryptography, and commercial information security companies and consultants. A generation of cryptanalysts has cut its teeth analyzing (that is, trying to "crack") the DES algorithm. In the words of cryptographer Bruce Schneier, "DES did more to galvanize the field of cryptanalysis than anything else. Now there was an algorithm to study." An astonishing share of the open literature in cryptography in the 1970s and 1980s dealt with the DES, and the DES is the standard against which every symmetric key algorithm since has been compared.
Chronology
Description
DES is the archetypal block cipher—an algorithm that takes a fixed-length string of plaintext bits and transforms it through a series of complicated operations into another ciphertext bitstring of the same length. In the case of DES, the block size is 64 bits. DES also uses a key to customize the transformation, so that decryption can supposedly only be performed by those who know the particular key used to encrypt. The key ostensibly consists of 64 bits; however, only 56 of these are actually used by the algorithm. Eight bits are used solely for checking parity, and are thereafter discarded. Hence the effective key length is 56 bits.
The key is nominally stored or transmitted as 8 bytes, each with odd parity. According to ANSI X3.92-1981 (now known as ANSI INCITS 92-1981), section 3.5, one bit in each 8-bit byte of the key may be used for error detection in key generation, distribution, and storage; bits 8, 16, ..., 64 are used to ensure that each byte has odd parity.
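A short Python sketch of the parity convention (the example key is a commonly used textbook value, not anything mandated by the standard):

```python
def has_odd_parity(key: bytes) -> bool:
    # Each byte's least significant bit is set so that the byte contains an
    # odd number of 1 bits; only the other 56 bits contribute to the key.
    assert len(key) == 8
    return all(bin(byte).count("1") % 2 == 1 for byte in key)

print(has_odd_parity(bytes.fromhex("133457799BBCDFF1")))   # True
```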
Like other block ciphers, DES by itself is not a secure means of encryption, but must instead be used in a mode of operation. FIPS-81 specifies several modes for use with DES. Further comments on the usage of DES are contained in FIPS-74.
Decryption uses the same structure as encryption, but with the keys used in reverse order. (This has the advantage that the same hardware or software can be used in both directions.)
Overall structure
The algorithm's overall structure is shown in Figure 1: there are 16 identical stages of processing, termed rounds. There is also an initial and final permutation, termed IP and FP, which are inverses (IP "undoes" the action of FP, and vice versa). IP and FP have no cryptographic significance, but were included in order to facilitate loading blocks in and out of mid-1970s 8-bit based hardware.
Before the main rounds, the block is divided into two 32-bit halves and processed alternately; this criss-crossing is known as the Feistel scheme. The Feistel structure ensures that decryption and encryption are very similar processes—the only difference is that the subkeys are applied in the reverse order when decrypting. The rest of the algorithm is identical. This greatly simplifies implementation, particularly in hardware, as there is no need for separate encryption and decryption algorithms.
The ⊕ symbol denotes the exclusive-OR (XOR) operation. The F-function scrambles half a block together with some of the key. The output from the F-function is then combined with the other half of the block, and the halves are swapped before the next round. After the final round, the halves are swapped; this is a feature of the Feistel structure which makes encryption and decryption similar processes.
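The Feistel skeleton can be sketched in a few lines of Python; the round function below is a toy stand-in rather than the real DES F-function, but it shows why decryption is simply encryption with the subkeys reversed:

```python
# Generic Feistel skeleton (not DES itself): F is a toy round function, while
# DES uses expansion, key mixing, S-boxes and a permutation.
def F(half, subkey):
    return (half * 0x9E3779B1 ^ subkey) & 0xFFFFFFFF

def feistel(left, right, subkeys):
    for k in subkeys:
        left, right = right, left ^ F(right, k)
    return right, left            # final swap, as in DES

def encrypt(block, subkeys):
    left, right = block >> 32, block & 0xFFFFFFFF
    left, right = feistel(left, right, subkeys)
    return (left << 32) | right

def decrypt(block, subkeys):
    # Decryption is the same structure with the subkeys in reverse order.
    return encrypt(block, list(reversed(subkeys)))

subkeys = list(range(1, 17))      # 16 toy subkeys
pt = 0x0123456789ABCDEF
assert decrypt(encrypt(pt, subkeys), subkeys) == pt
```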
The Feistel (F) function
The F-function, depicted in Figure 2, operates on half a block (32 bits) at a time and consists of four stages:
Expansion: the 32-bit half-block is expanded to 48 bits using the expansion permutation, denoted E in the diagram, by duplicating half of the bits. The output consists of eight 6-bit (8 × 6 = 48 bits) pieces, each containing a copy of 4 corresponding input bits, plus a copy of the immediately adjacent bit from each of the input pieces to either side.
Key mixing: the result is combined with a subkey using an XOR operation. Sixteen 48-bit subkeys—one for each round—are derived from the main key using the key schedule (described below).
Substitution: after mixing in the subkey, the block is divided into eight 6-bit pieces before processing by the S-boxes, or substitution boxes. Each of the eight S-boxes replaces its six input bits with four output bits according to a non-linear transformation, provided in the form of a lookup table. The S-boxes provide the core of the security of DES—without them, the cipher would be linear, and trivially breakable.
Permutation: finally, the 32 outputs from the S-boxes are rearranged according to a fixed permutation, the P-box. This is designed so that, after permutation, the bits from the output of each S-box in this round are spread across four different S-boxes in the next round.
The alternation of substitution from the S-boxes, and permutation of bits from the P-box and E-expansion provides so-called "confusion and diffusion" respectively, a concept identified by Claude Shannon in the 1940s as a necessary condition for a secure yet practical cipher.
Key schedule
Figure 3 illustrates the key schedule for encryption—the algorithm which generates the subkeys. Initially, 56 bits of the key are selected from the initial 64 by Permuted Choice 1 (PC-1)—the remaining eight bits are either discarded or used as parity check bits. The 56 bits are then divided into two 28-bit halves; each half is thereafter treated separately. In successive rounds, both halves are rotated left by one or two bits (specified for each round), and then 48 subkey bits are selected by Permuted Choice 2 (PC-2)—24 bits from the left half, and 24 from the right. The rotations (denoted by "<<<" in the diagram) mean that a different set of bits is used in each subkey; each bit is used in approximately 14 out of the 16 subkeys.
The key schedule for decryption is similar—the subkeys are in reverse order compared to encryption. Apart from that change, the process is the same as for encryption. The same 28 bits are passed to all rotation boxes.
Security and cryptanalysis
Although more information has been published on the cryptanalysis of DES than any other block cipher, the most practical attack to date is still a brute-force approach. Various minor cryptanalytic properties are known, and three theoretical attacks are possible which, while having a theoretical complexity less than a brute-force attack, require an unrealistic number of known or chosen plaintexts to carry out, and are not a concern in practice.
Brute-force attack
For any cipher, the most basic method of attack is brute force—trying every possible key in turn. The length of the key determines the number of possible keys, and hence the feasibility of this approach. For DES, questions were raised about the adequacy of its key size early on, even before it was adopted as a standard, and it was the small key size, rather than theoretical cryptanalysis, which dictated a need for a replacement algorithm. As a result of discussions involving external consultants including the NSA, the key size was reduced from 128 bits to 56 bits to fit on a single chip.
In academia, various proposals for a DES-cracking machine were advanced. In 1977, Diffie and Hellman proposed a machine costing an estimated US$20 million which could find a DES key in a single day. By 1993, Wiener had proposed a key-search machine costing US$1 million which would find a key within 7 hours. However, none of these early proposals were ever implemented—or, at least, no implementations were publicly acknowledged. The vulnerability of DES was practically demonstrated in the late 1990s. In 1997, RSA Security sponsored a series of contests, offering a $10,000 prize to the first team that broke a message encrypted with DES for the contest. That contest was won by the DESCHALL Project, led by Rocke Verser, Matt Curtin, and Justin Dolske, using idle cycles of thousands of computers across the Internet. The feasibility of cracking DES quickly was demonstrated in 1998 when a custom DES-cracker was built by the Electronic Frontier Foundation (EFF), a cyberspace civil rights group, at the cost of approximately US$250,000 (see EFF DES cracker). Their motivation was to show that DES was breakable in practice as well as in theory: "There are many people who will not believe a truth until they can see it with their own eyes. Showing them a physical machine that can crack DES in a few days is the only way to convince some people that they really cannot trust their security to DES." The machine brute-forced a key in a little more than 2 days' worth of searching.
The next confirmed DES cracker was the COPACOBANA machine built in 2006 by teams of the Universities of Bochum and Kiel, both in Germany. Unlike the EFF machine, COPACOBANA consists of commercially available, reconfigurable integrated circuits. 120 of these field-programmable gate arrays (FPGAs) of type XILINX Spartan-3 1000 run in parallel. They are grouped in 20 DIMM modules, each containing 6 FPGAs. The use of reconfigurable hardware makes the machine applicable to other code breaking tasks as well. One of the more interesting aspects of COPACOBANA is its cost factor. One machine can be built for approximately $10,000. The cost decrease by roughly a factor of 25 over the EFF machine is an example of the continuous improvement of digital hardware—see Moore's law. Adjusting for inflation over 8 years yields an even higher improvement of about 30x. Since 2007, SciEngines GmbH, a spin-off company of the two project partners of COPACOBANA has enhanced and developed successors of COPACOBANA. In 2008 their COPACOBANA RIVYERA reduced the time to break DES to less than one day, using 128 Spartan-3 5000's. SciEngines RIVYERA held the record in brute-force breaking DES, having utilized 128 Spartan-3 5000 FPGAs. Their 256 Spartan-6 LX150 model has further lowered this time.
In 2012, David Hulton and Moxie Marlinspike announced a system with 48 Xilinx Virtex-6 LX240T FPGAs, each FPGA containing 40 fully pipelined DES cores running at 400 MHz, for a total capacity of 768 gigakeys/sec. The system can exhaustively search the entire 56-bit DES key space in about 26 hours and this service is offered for a fee online.
Attacks faster than brute force
There are three attacks known that can break the full 16 rounds of DES with less complexity than a brute-force search: differential cryptanalysis (DC), linear cryptanalysis (LC), and Davies' attack. However, the attacks are theoretical and are generally considered infeasible to mount in practice; these types of attack are sometimes termed certificational weaknesses.
Differential cryptanalysis was rediscovered in the late 1980s by Eli Biham and Adi Shamir; it was known earlier to both IBM and the NSA and kept secret. To break the full 16 rounds, differential cryptanalysis requires 2^47 chosen plaintexts. DES was designed to be resistant to DC.
Linear cryptanalysis was discovered by Mitsuru Matsui, and needs 2^43 known plaintexts (Matsui, 1993); the method was implemented (Matsui, 1994), and was the first experimental cryptanalysis of DES to be reported. There is no evidence that DES was tailored to be resistant to this type of attack. A generalization of LC—multiple linear cryptanalysis—was suggested in 1994 (Kaliski and Robshaw), and was further refined by Biryukov and others (2004); their analysis suggests that multiple linear approximations could be used to reduce the data requirements of the attack by at least a factor of 4 (that is, 2^41 instead of 2^43). A similar reduction in data complexity can be obtained in a chosen-plaintext variant of linear cryptanalysis (Knudsen and Mathiassen, 2000). Junod (2001) performed several experiments to determine the actual time complexity of linear cryptanalysis, and reported that it was somewhat faster than predicted, requiring time equivalent to 2^39–2^41 DES evaluations.
Improved Davies' attack: while linear and differential cryptanalysis are general techniques and can be applied to a number of schemes, Davies' attack is a specialized technique for DES, first suggested by Donald Davies in the eighties, and improved by Biham and Biryukov (1997). The most powerful form of the attack requires 2^50 known plaintexts, has a computational complexity of 2^50, and has a 51% success rate.
There have also been attacks proposed against reduced-round versions of the cipher, that is, versions of DES with fewer than 16 rounds. Such analysis gives an insight into how many rounds are needed for safety, and how much of a "security margin" the full version retains.
Differential-linear cryptanalysis was proposed by Langford and Hellman in 1994, and combines differential and linear cryptanalysis into a single attack. An enhanced version of the attack can break 9-round DES with 2^15.8 chosen plaintexts and has a 2^29.2 time complexity (Biham and others, 2002).
Minor cryptanalytic properties
DES exhibits the complementation property, namely that
if E_K(x) = y, then E_K'(x') = y', where z' denotes the bitwise complement of z, E_K denotes encryption with key K, and x and y denote plaintext and ciphertext blocks respectively. The complementation property means that the work for a brute-force attack could be reduced by a factor of 2 (or a single bit) under a chosen-plaintext assumption. By definition, this property also applies to the TDES cipher.
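The complementation property can be checked with a few lines of Python, assuming the PyCryptodome package is available (the key and plaintext values are arbitrary examples):

```python
# Demonstration of the DES complementation property using PyCryptodome
# (pip install pycryptodome).
from Crypto.Cipher import DES

def complement(data: bytes) -> bytes:
    return bytes(b ^ 0xFF for b in data)

key = bytes.fromhex("133457799BBCDFF1")
pt  = bytes.fromhex("0123456789ABCDEF")

ct      = DES.new(key, DES.MODE_ECB).encrypt(pt)
ct_comp = DES.new(complement(key), DES.MODE_ECB).encrypt(complement(pt))

assert ct_comp == complement(ct)   # E_K'(x') is the complement of E_K(x)
```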
DES also has four so-called weak keys. Encryption (E) and decryption (D) under a weak key have the same effect (see involution):
E_K(E_K(x)) = x for all x, or equivalently, E_K = D_K.
There are also six pairs of semi-weak keys. Encryption with one of the pair of semiweak keys, K1, operates identically to decryption with the other, K2:
E_K1(E_K2(x)) = x for all x, or equivalently, E_K2 = D_K1.
It is easy enough to avoid the weak and semiweak keys in an implementation, either by testing for them explicitly, or simply by choosing keys randomly; the odds of picking a weak or semiweak key by chance are negligible. The keys are not really any weaker than any other keys anyway, as they do not give an attack any advantage.
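A sketch of the weak-key involution E_K(E_K(x)) = x, again assuming a DES implementation such as PyCryptodome that does not reject the documented weak keys (some libraries refuse them):

```python
# Encrypting twice under a DES weak key returns the original plaintext.
from Crypto.Cipher import DES

weak_keys = [
    "0101010101010101", "FEFEFEFEFEFEFEFE",
    "E0E0E0E0F1F1F1F1", "1F1F1F1F0E0E0E0E",
]
pt = bytes.fromhex("0123456789ABCDEF")

for hexkey in weak_keys:
    cipher = DES.new(bytes.fromhex(hexkey), DES.MODE_ECB)
    assert cipher.encrypt(cipher.encrypt(pt)) == pt
```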
DES has also been proved not to be a group, or more precisely, the set {E_K} (for all possible keys K) under functional composition is not a group, nor "close" to being a group. This was an open question for some time, and if it had been the case, it would have been possible to break DES, and multiple encryption modes such as Triple DES would not increase the security, because repeated encryption (and decryption) under different keys would be equivalent to encryption under another, single key.
Simplified DES
Simplified DES (SDES) was designed for educational purposes only, to help students learn about modern cryptanalytic techniques.
SDES has similar structure and properties to DES, but has been simplified to make it much easier to perform encryption and decryption by hand with pencil and paper.
Some people feel that learning SDES gives insight into DES and other block ciphers, and insight into various cryptanalytic attacks against them.
Replacement algorithms
Concerns about security and the relatively slow operation of DES in software motivated researchers to propose a variety of alternative block cipher designs, which started to appear in the late 1980s and early 1990s: examples include RC5, Blowfish, IDEA, NewDES, SAFER, CAST5 and FEAL. Most of these designs kept the 64-bit block size of DES, and could act as a "drop-in" replacement, although they typically used a 64-bit or 128-bit key. In the Soviet Union the GOST 28147-89 algorithm was introduced, with a 64-bit block size and a 256-bit key, which was also used in Russia later.
DES itself can be adapted and reused in a more secure scheme. Many former DES users now use Triple DES (TDES) which was described and analysed by one of DES's patentees (see FIPS Pub 46-3); it involves applying DES three times with two (2TDES) or three (3TDES) different keys. TDES is regarded as adequately secure, although it is quite slow. A less computationally expensive alternative is DES-X, which increases the key size by XORing extra key material before and after DES. GDES was a DES variant proposed as a way to speed up encryption, but it was shown to be susceptible to differential cryptanalysis.
On January 2, 1997, NIST announced that they wished to choose a successor to DES. In 2001, after an international competition, NIST selected a new cipher, the Advanced Encryption Standard (AES), as a replacement. The algorithm which was selected as the AES was submitted by its designers under the name Rijndael. Other finalists in the NIST AES competition included RC6, Serpent, MARS, and Twofish.
See also
Brute Force: Cracking the Data Encryption Standard
DES supplementary material
Skipjack (cipher)
Triple DES
Notes
References
Biham, Eli and Shamir, Adi, Differential Cryptanalysis of the Data Encryption Standard, Springer Verlag, 1993. , .
Biham, Eli and Alex Biryukov: An Improvement of Davies' Attack on DES. J. Cryptology 10(3): 195–206 (1997)
Biham, Eli, Orr Dunkelman, Nathan Keller: Enhancing Differential-Linear Cryptanalysis. ASIACRYPT 2002: pp254–266
Biham, Eli: A Fast New DES Implementation in Software
Cracking DES: Secrets of Encryption Research, Wiretap Politics, and Chip Design, Electronic Frontier Foundation
Campbell, Keith W., Michael J. Wiener: DES is not a Group. CRYPTO 1992: pp512–520
Coppersmith, Don (1994). "The Data Encryption Standard (DES) and its strength against attacks". IBM Journal of Research and Development, 38(3), 243–250.
Diffie, Whitfield and Martin Hellman, "Exhaustive Cryptanalysis of the NBS Data Encryption Standard" IEEE Computer 10(6), June 1977, pp74–84
Ehrsam and others, Product Block Cipher System for Data Security, filed February 24, 1975
Gilmore, John, "Cracking DES: Secrets of Encryption Research, Wiretap Politics and Chip Design", 1998, O'Reilly, .
Junod, Pascal. "On the Complexity of Matsui's Attack." Selected Areas in Cryptography, 2001, pp199–211.
Kaliski, Burton S., Matt Robshaw: Linear Cryptanalysis Using Multiple Approximations. CRYPTO 1994: pp26–39
Knudsen, Lars, John Erik Mathiassen: A Chosen-Plaintext Linear Attack on DES. Fast Software Encryption - FSE 2000: pp262–272
Langford, Susan K., Martin E. Hellman: Differential-Linear Cryptanalysis. CRYPTO 1994: 17–25
Levy, Steven, Crypto: How the Code Rebels Beat the Government—Saving Privacy in the Digital Age, 2001, .
National Bureau of Standards, Data Encryption Standard, FIPS-Pub.46. National Bureau of Standards, U.S. Department of Commerce, Washington D.C., January 1977.
Christof Paar, Jan Pelzl, "The Data Encryption Standard (DES) and Alternatives", free online lectures on Chapter 3 of "Understanding Cryptography, A Textbook for Students and Practitioners". Springer, 2009.
External links
FIPS 46-3: The official document describing the DES standard (PDF)
COPACOBANA, a $10,000 DES cracker based on FPGAs by the Universities of Bochum and Kiel
DES step-by-step presentation and reliable message encoding application
A Fast New DES Implementation in Software - Biham
On Multiple Linear Approximations
RFC4772 : Security Implications of Using the Data Encryption Standard (DES)
Broken block ciphers |
8271 | https://en.wikipedia.org/wiki/Digital%20television | Digital television | Digital television (DTV) is the transmission of television signals using digital encoding, in contrast to the earlier analog television technology which used analog signals. At the time of its development it was considered an innovative advancement and represented the first significant evolution in television technology since color television in the 1950s. Modern digital television is transmitted in high-definition television (HDTV) with greater resolution than analog TV. It typically uses a widescreen aspect ratio (commonly 16:9) in contrast to the narrower format of analog TV. It makes more economical use of scarce radio spectrum space; it can transmit up to seven channels in the same bandwidth as a single analog channel, and provides many new features that analog television cannot. A transition from analog to digital broadcasting began around 2000. Different digital television broadcasting standards have been adopted in different parts of the world; below are the more widely used standards:
Digital Video Broadcasting (DVB) uses coded orthogonal frequency-division multiplexing (OFDM) modulation and supports hierarchical transmission. This standard has been adopted in Europe, Africa, Asia, and Australia, for a total of approximately 60 countries.
Advanced Television System Committee (ATSC) uses eight-level vestigial sideband (8VSB) for terrestrial broadcasting. This standard has been adopted by 9 countries: the United States, Canada, Mexico, South Korea, Bahamas, Jamaica, the Dominican Republic, Haiti and Suriname.
Integrated Services Digital Broadcasting (ISDB) is a system designed to provide good reception to fixed receivers and also portable or mobile receivers. It utilizes OFDM and two-dimensional interleaving. It supports hierarchical transmission of up to three layers and uses MPEG-2 video and Advanced Audio Coding. This standard has been adopted in Japan and the Philippines. ISDB-T International is an adaptation of this standard using H.264/MPEG-4 AVC, which has been adopted in most of South America and Portuguese-speaking African countries.
Digital Terrestrial Multimedia Broadcasting (DTMB) adopts time-domain synchronous (TDS) OFDM technology with a pseudo-random signal frame to serve as the guard interval (GI) of the OFDM block and the training symbol. The DTMB standard has been adopted in China, including Hong Kong and Macau.
Digital Multimedia Broadcasting (DMB) is a digital radio transmission technology developed in South Korea as part of the national IT project for sending multimedia such as TV, radio and datacasting to mobile devices such as mobile phones, laptops and GPS navigation systems.
History
Background
Digital television's roots have been tied very closely to the availability of inexpensive, high performance computers. It was not until the 1990s that digital TV became a real possibility. Digital television was not practically feasible earlier because of the prohibitively high bandwidth requirements of uncompressed digital video, which needs around 200 Mbit/s (25 MB/s) for a standard-definition television (SDTV) signal, and over 1 Gbit/s for high-definition television (HDTV).
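A back-of-envelope Python calculation of these uncompressed rates (the resolutions, frame rates and the 16 bits-per-pixel figure for 4:2:2 sampling are assumptions; exact values vary by format):

```python
# Rough estimate of uncompressed video bit rates; exact figures depend on
# resolution, frame rate, bit depth and chroma subsampling.
def raw_bitrate(width, height, fps, bits_per_pixel):
    return width * height * fps * bits_per_pixel   # bits per second

# 4:2:2 sampling at 8 bits per component averages 16 bits per pixel.
sdtv = raw_bitrate(720, 576, 25, 16)     # ~166 Mbit/s
hdtv = raw_bitrate(1920, 1080, 30, 16)   # ~995 Mbit/s

print(f"SDTV ~ {sdtv / 1e6:.0f} Mbit/s, HDTV ~ {hdtv / 1e6:.0f} Mbit/s")
```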
Development
In the mid-1980s, Toshiba released a television set with digital capabilities, using integrated circuit chips such as a microprocessor to convert analog television broadcast signals to digital video signals, enabling features such as freezing pictures and showing two channels at once. In 1986, Sony and NEC Home Electronics announced their own similar TV sets with digital video capabilities. However, they still relied on analog TV broadcast signals, with true digital TV broadcasts not yet being available at the time.
A digital TV broadcast service was proposed in 1986 by Nippon Telegraph and Telephone (NTT) and the Ministry of Posts and Telecommunication (MPT) in Japan, where there were plans to develop an "Integrated Network System" service. However, it was not possible to practically implement such a digital TV service until the adoption of discrete cosine transform (DCT) video compression technology made it possible in the early 1990s.
In the mid-1980s, as Japanese consumer electronics firms forged ahead with the development of HDTV technology, and as the MUSE analog format was proposed by Japan's public broadcaster NHK as a worldwide standard, Japanese advancements were seen as pacesetters that threatened to eclipse U.S. electronics companies. Until June 1990, the Japanese MUSE standard—based on an analog system—was the front-runner among the more than 23 different technical concepts under consideration.
Between 1988 and 1991, several European organizations were working on DCT-based digital video coding standards for both SDTV and HDTV. The EU 256 project by the CMTT and ETSI, along with research by Italian broadcaster RAI, developed a DCT video codec that broadcast SDTV at 34 Mbit/s and near-studio-quality HDTV at about 70–140 Mbit/s. RAI demonstrated this with a 1990 FIFA World Cup broadcast in March 1990. An American company, General Instrument, also demonstrated the feasibility of a digital television signal in 1990. This led to the FCC being persuaded to delay its decision on an ATV standard until a digitally based standard could be developed.
In March 1990, when it became clear that a digital standard was feasible, the FCC made a number of critical decisions. First, the Commission declared that the new TV standard must be more than an enhanced analog signal, but be able to provide a genuine HDTV signal with at least twice the resolution of existing television images. Then, to ensure that viewers who did not wish to buy a new digital television set could continue to receive conventional television broadcasts, it dictated that the new ATV standard must be capable of being "simulcast" on different channels. The new ATV standard also allowed the new DTV signal to be based on entirely new design principles. Although incompatible with the existing NTSC standard, the new DTV standard would be able to incorporate many improvements.
The final standard adopted by the FCC did not require a single standard for scanning formats, aspect ratios, or lines of resolution. This outcome resulted from a dispute between the consumer electronics industry (joined by some broadcasters) and the computer industry (joined by the film industry and some public interest groups) over which of the two scanning processes—interlaced or progressive—is superior. Interlaced scanning, which is used in televisions worldwide, scans even-numbered lines first, then odd-numbered ones. Progressive scanning, which is the format used in computers, scans lines in sequences, from top to bottom. The computer industry argued that progressive scanning is superior because it does not "flicker" in the manner of interlaced scanning. It also argued that progressive scanning enables easier connections with the Internet, and is more cheaply converted to interlaced formats than vice versa. The film industry also supported progressive scanning because it offers a more efficient means of converting filmed programming into digital formats. For their part, the consumer electronics industry and broadcasters argued that interlaced scanning was the only technology that could transmit the highest quality pictures then (and currently) feasible, i.e., 1,080 lines per picture and 1,920 pixels per line. Broadcasters also favored interlaced scanning because their vast archive of interlaced programming is not readily compatible with a progressive format.
Inaugural launches
DirecTV in the U.S. launched the first commercial digital satellite platform in May 1994, using the Digital Satellite System (DSS) standard. Digital cable broadcasts were tested and launched in the U.S. in 1996 by TCI and Time Warner. The first digital terrestrial platform was launched in November 1998 as ONdigital in the United Kingdom, using the DVB-T standard.
Technical information
Formats and bandwidth
Digital television supports many different picture formats defined by the broadcast television systems which are a combination of size and aspect ratio (width to height ratio).
With digital terrestrial television (DTT) broadcasting, the range of formats can be broadly divided into two categories: high-definition television (HDTV) for the transmission of high-definition video and standard-definition television (SDTV). These terms by themselves are not very precise, and many subtle intermediate cases exist.
Two of the several HDTV formats that can be transmitted over DTV are 1280 × 720 pixels in progressive scan mode (abbreviated 720p) and 1920 × 1080 pixels in interlaced video mode (1080i). Each of these uses a 16:9 aspect ratio. HDTV cannot be transmitted over analog television channels because of channel capacity issues.
SDTV, by comparison, may use one of several different formats taking the form of various aspect ratios depending on the technology used in the country of broadcast. In terms of rectangular pixels, NTSC countries can deliver a 640 × 480 resolution in 4:3 and 854 × 480 in 16:9, while PAL can give 768 × 576 in 4:3 and 1024 × 576 in 16:9. However, broadcasters may choose to reduce these resolutions to reduce bit rate (e.g., many DVB-T channels in the United Kingdom use a horizontal resolution of 544 or 704 pixels per line).
Each commercial broadcasting terrestrial television DTV channel in North America is permitted to be broadcast at a bit rate up to 19 megabits per second. However, the broadcaster does not need to use this entire bandwidth for just one broadcast channel. Instead the broadcast can use the channel to include PSIP and can also subdivide across several video subchannels (a.k.a. feeds) of varying quality and compression rates, including non-video datacasting services that allow one-way high-bit-rate streaming of data to computers like National Datacast.
A broadcaster may opt to use a standard-definition (SDTV) digital signal instead of an HDTV signal, because current convention allows the bandwidth of a DTV channel (or "multiplex") to be subdivided into multiple digital subchannels, (similar to what most FM radio stations offer with HD Radio), providing multiple feeds of entirely different television programming on the same channel. This ability to provide either a single HDTV feed or multiple lower-resolution feeds is often referred to as distributing one's "bit budget" or multicasting. This can sometimes be arranged automatically, using a statistical multiplexer (or "stat-mux"). With some implementations, image resolution may be less directly limited by bandwidth; for example in DVB-T, broadcasters can choose from several different modulation schemes, giving them the option to reduce the transmission bit rate and make reception easier for more distant or mobile viewers.
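As a rough illustration of such a bit budget, the sketch below (Python) divides a single channel's payload among hypothetical subchannels; 19.39 Mbit/s approximates the ATSC payload that the text above rounds to 19 Mbit/s, and the individual allocations are invented for the example.

```python
# Illustrative only: dividing one terrestrial DTV channel's payload among
# hypothetical subchannels. The allocations are invented for the example.
TOTAL_PAYLOAD_MBPS = 19.39  # approximate ATSC payload of one 6 MHz channel

subchannels = {
    "x.1 HD feed (720p)": 12.0,
    "x.2 SD feed": 3.5,
    "x.3 SD feed": 2.5,
    "datacasting service": 1.0,
}

used = sum(subchannels.values())
print(f"allocated {used:.2f} of {TOTAL_PAYLOAD_MBPS} Mbit/s, "
      f"{TOTAL_PAYLOAD_MBPS - used:.2f} Mbit/s left for PSIP and overhead")
for name, rate in subchannels.items():
    print(f"  {name}: {rate} Mbit/s ({rate / TOTAL_PAYLOAD_MBPS:.0%} of the multiplex)")
```

In practice a statistical multiplexer varies these allocations from moment to moment rather than fixing them as constants.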
Receiving digital signal
There are several different ways to receive digital television. One of the oldest means of receiving DTV (and TV in general) is from terrestrial transmitters using an antenna (known as an aerial in some countries). This way is known as Digital terrestrial television (DTT). With DTT, viewers are limited to channels that have a terrestrial transmitter in range of their antenna.
Other ways have been devised to receive digital television. Among the most familiar to people are digital cable and digital satellite. In some countries where transmissions of TV signals are normally achieved by microwaves, digital MMDS is used. Other standards, such as Digital multimedia broadcasting (DMB) and DVB-H, have been devised to allow handheld devices such as mobile phones to receive TV signals. Another way is IPTV, that is receiving TV via Internet Protocol, relying on digital subscriber line (DSL) or optical cable line. Finally, an alternative way is to receive digital TV signals via the open Internet (Internet television), whether from a central streaming service or a P2P (peer-to-peer) system.
Some signals carry encryption and specify use conditions (such as "may not be recorded" or "may not be viewed on displays larger than 1 m in diagonal measure") backed up with the force of law under the World Intellectual Property Organization Copyright Treaty (WIPO Copyright Treaty) and national legislation implementing it, such as the U.S. Digital Millennium Copyright Act. Access to encrypted channels can be controlled by a removable smart card, for example via the Common Interface (DVB-CI) standard in Europe and via the Point of Deployment (POD) module, also known as CableCARD, used by North American cable systems.
Protection parameters for terrestrial DTV broadcasting
Digital television signals must not interfere with each other, and they must also coexist with analog television until it is phased out.
The following table gives allowable signal-to-noise and signal-to-interference ratios for various interference scenarios. This table is a crucial regulatory tool for controlling the placement and power levels of stations. Digital TV is more tolerant of interference than analog TV, and this is the reason a smaller range of channels can carry an all-digital set of television stations.
Interaction
People can interact with a DTV system in various ways. One can, for example, browse the electronic program guide. Modern DTV systems sometimes use a return path providing feedback from the end user to the broadcaster. This is possible with a coaxial or fiber optic cable, a dialup modem, or Internet connection but is not possible with a standard antenna.
Some of these systems support video on demand using a communication channel localized to a neighborhood rather than a city (terrestrial) or an even larger area (satellite).
1-segment broadcasting
1seg (1-segment) is a special form of ISDB. Each channel is further divided into 13 segments. Twelve of these segments are allocated for HDTV, and the remaining segment, the 13th, is used for narrow-band receivers such as mobile television or cell phones.
Timeline of transition
Comparison of analog vs digital
DTV has several advantages over analog TV, the most significant being that digital channels take up less bandwidth, and the bandwidth needs are continuously variable, at a corresponding reduction in image quality depending on the level of compression as well as the resolution of the transmitted image. This means that digital broadcasters can provide more digital channels in the same space, provide high-definition television service, or provide other non-television services such as multimedia or interactivity. DTV also permits special services such as multiplexing (more than one program on the same channel), electronic program guides and additional languages (spoken or subtitled). The sale of non-television services may provide an additional revenue source.
Digital and analog signals react to interference differently. For example, common problems with analog television include ghosting of images, noise from weak signals, and many other potential problems which degrade the quality of the image and sound, although the program material may still be watchable. With digital television, the audio and video must be synchronized digitally, so reception of the digital signal must be very nearly complete; otherwise, neither audio nor video will be usable. Short of this complete failure, "blocky" video is seen when the digital signal experiences interference.
Analog TV began with monophonic sound, and later developed multichannel television sound with two independent audio signal channels. DTV allows up to 5 audio signal channels plus a subwoofer bass channel, with broadcasts similar in quality to movie theaters and DVDs.
Digital TV signals require less transmission power than analog TV signals to be broadcast and received satisfactorily.
Compression artifacts, picture quality monitoring, and allocated bandwidth
DTV images have some picture defects that are not present on analog television or motion picture cinema, because of present-day limitations of bit rate and compression algorithms such as MPEG-2. This defect is sometimes referred to as "mosquito noise".
Because of the way the human visual system works, defects in an image that are localized to particular features of the image or that come and go are more perceptible than defects that are uniform and constant. However, the DTV system is designed to take advantage of other limitations of the human visual system to help mask these flaws, e.g. by allowing more compression artifacts during fast motion where the eye cannot track and resolve them as easily and, conversely, minimizing artifacts in still backgrounds that may be closely examined in a scene (since time allows).
Broadcast, cable, satellite, and Internet DTV operators control the picture quality of television signal encodes using sophisticated, neuroscience-based algorithms, such as the structural similarity (SSIM) video quality measurement tool, which earned each of its inventors a Primetime Emmy Award because of its global use. Another tool, called Visual Information Fidelity (VIF), is a top-performing algorithm at the core of the Netflix VMAF video quality monitoring system, which accounts for about 35% of all U.S. bandwidth consumption.
Effects of poor reception
Changes in signal reception from factors such as degrading antenna connections or changing weather conditions may gradually reduce the quality of analog TV. The nature of digital TV results in a perfectly decodable video initially, until the receiving equipment starts picking up interference that overpowers the desired signal or the signal becomes too weak to decode. Some equipment will show a garbled picture with significant damage, while other devices may go directly from perfectly decodable video to no video at all or lock up. This phenomenon is known as the digital cliff effect.
Block error may occur when transmission is done with compressed images. A block error in a single frame often results in black boxes in several subsequent frames, making viewing difficult.
For remote locations, distant channels that, as analog signals, were previously usable in a snowy and degraded state may, as digital signals, be perfectly decodable or may become completely unavailable. The use of higher frequencies will add to these problems, especially in cases where a clear line-of-sight from the receiving antenna to the transmitter is not available, because usually higher frequency signals can't pass through obstacles as easily.
Effect on old analog technology
Television sets with only analog tuners cannot decode digital transmissions. When analog broadcasting over the air ceases, users of sets with analog-only tuners may use other sources of programming (e.g. cable, recorded media) or may purchase set-top converter boxes to tune in the digital signals. In the United States, a government-sponsored coupon was available to offset the cost of an external converter box. Analog switch-off (of full-power stations) took place on December 11, 2006 in The Netherlands, June 12, 2009 in the United States for full-power stations, and later for Class-A Stations on September 1, 2016, July 24, 2011 in Japan, August 31, 2011 in Canada, February 13, 2012 in Arab states, May 1, 2012 in Germany, October 24, 2012 in the United Kingdom and Ireland, October 31, 2012 in selected Indian cities, and December 10, 2013 in Australia. Completion of analog switch-off is scheduled for December 31, 2017 in the whole of India, December 2018 in Costa Rica and around 2020 for the Philippines.
Disappearance of TV-audio receivers
Prior to the conversion to digital TV, analog television broadcast audio for TV channels on a separate FM carrier signal from the video signal. This FM audio signal could be heard using standard radios equipped with the appropriate tuning circuits.
However, after the transition of many countries to digital TV, no portable radio manufacturer has yet developed an alternative method for portable radios to play just the audio signal of digital TV channels; DTV radio is not the same thing.
Environmental issues
The adoption of a broadcast standard incompatible with existing analog receivers has created the problem of large numbers of analog receivers being discarded during the digital television transition. One superintendent of public works was quoted in 2009 as saying, "some of the studies I’ve read in the trade magazines say up to a quarter of American households could be throwing a TV out in the next two years following the regulation change". In 2009, an estimated 99 million analog TV receivers were sitting unused in homes in the US alone and, while some obsolete receivers are being retrofitted with converters, many more are simply dumped in landfills where they represent a source of toxic metals such as lead as well as lesser amounts of materials such as barium, cadmium and chromium.
According to one campaign group, a CRT computer monitor or TV contains an average of of lead. According to another source, the lead in glass of a CRT varies from 1.08 lb to 11.28 lb, depending on screen size and type, but the lead is in the form of "stable and immobile" lead oxide mixed into the glass. It is claimed that the lead can have long-term negative effects on the environment if dumped as landfill. However, the glass envelope can be recycled at suitably equipped facilities. Other portions of the receiver may be subject to disposal as hazardous material.
Local restrictions on disposal of these materials vary widely; in some cases second-hand stores have refused to accept working color television receivers for resale due to the increasing costs of disposing of unsold TVs. Those thrift stores which are still accepting donated TVs have reported significant increases in good-condition working used television receivers abandoned by viewers who often expect them not to work after digital transition.
In Michigan in 2009, one recycler estimated that as many as one household in four would dispose of or recycle a TV set in the following year. The digital television transition, migration to high-definition television receivers and the replacement of CRTs with flatscreens are all factors in the increasing number of discarded analog CRT-based television receivers.
See also
Autoroll
Broadcast television systems
Digital television in the United States
Digital terrestrial television
Text to Speech in Digital Television
References
Further reading
Hart, Jeffrey A., Television, technology, and competition : HDTV and digital TV in the United States, Western Europe, and Japan, New York : Cambridge University Press, 2004.
External links
Overview of Digital Television Development Worldwide Proceedings of the IEEE, VOL. 94, NO. 1, JANUARY 2006 (University of Texas at San Antonio)
The FCC's U.S. consumer-oriented DTV website
Digital TV Consumer test reports - UK Government-funded website to support Digital Switchover
History of television
Film and video technology
Television terminology
Television
Japanese inventions
Telecommunications-related introductions in the 1990s |
8339 | https://en.wikipedia.org/wiki/Domain%20Name%20System | Domain Name System | The Domain Name System (DNS) is the hierarchical and decentralized naming system used to identify computers, services, and other resources reachable through the Internet or other Internet Protocol (IP) networks. The resource records contained in the DNS associate domain names with other forms of information. These are most commonly used to map human-friendly domain names to the numerical IP addresses computers need to locate services and devices using the underlying network protocols, but have been extended over time to perform many other functions as well. The Domain Name System has been an essential component of the functionality of the Internet since 1985.
Function
An often-used analogy to explain the Domain Name System is that it serves as the phone book for the Internet by translating human-friendly computer hostnames into IP addresses. For example, the domain name www.example.com translates to its IPv4 and IPv6 addresses. The DNS can be quickly and transparently updated, allowing a service's location on the network to change without affecting the end users, who continue to use the same hostname. Users take advantage of this when they use meaningful Uniform Resource Locators (URLs) and e-mail addresses without having to know how the computer actually locates the services.
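A minimal sketch of this lookup step, in Python using only the standard library, asks the operating system's stub resolver, which in turn uses the DNS, for the addresses behind a hostname; the hostname and port below are examples.

```python
# Ask the operating system's stub resolver (which in turn uses the DNS) for the
# addresses behind a hostname, much as a web browser would before connecting.
import socket

for family, _, _, _, sockaddr in socket.getaddrinfo("www.example.com", 443,
                                                    proto=socket.IPPROTO_TCP):
    label = "IPv6" if family == socket.AF_INET6 else "IPv4"
    print(label, sockaddr[0])
```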
An important and ubiquitous function of the DNS is its central role in distributed Internet services such as cloud services and content delivery networks. When a user accesses a distributed Internet service using a URL, the domain name of the URL is translated to the IP address of a server that is proximal to the user. The key functionality of the DNS exploited here is that different users can simultaneously receive different translations for the same domain name, a key point of divergence from a traditional phone-book view of the DNS. This process of using the DNS to assign proximal servers to users is key to providing faster and more reliable responses on the Internet and is widely used by most major Internet services.
The DNS reflects the structure of administrative responsibility in the Internet. Each subdomain is a zone of administrative autonomy delegated to a manager. For zones operated by a registry, administrative information is often complemented by the registry's RDAP and WHOIS services. That data can be used to gain insight on, and track responsibility for, a given host on the Internet.
History
Using a simpler, more memorable name in place of a host's numerical address dates back to the ARPANET era. The Stanford Research Institute (now SRI International) maintained a text file named HOSTS.TXT that mapped host names to the numerical addresses of computers on the ARPANET. Elizabeth Feinler developed and maintained the first ARPANET directory. Maintenance of numerical addresses, called the Assigned Numbers List, was handled by Jon Postel at the University of Southern California's Information Sciences Institute (ISI), whose team worked closely with SRI.
Addresses were assigned manually. Computers, including their hostnames and addresses, were added to the primary file by contacting the SRI Network Information Center (NIC), directed by Feinler, by telephone during business hours. Later, Feinler set up a WHOIS directory on a server in the NIC for retrieval of information about resources, contacts, and entities. She and her team developed the concept of domains. Feinler suggested that domains should be based on the location of the physical address of the computer. Computers at educational institutions would have the domain edu, for example. She and her team managed the Host Naming Registry from 1972 to 1989.
By the early 1980s, maintaining a single, centralized host table had become slow and unwieldy and the emerging network required an automated naming system to address technical and personnel issues. Postel directed the task of forging a compromise between five competing proposals of solutions to Paul Mockapetris. Mockapetris instead created the Domain Name System in 1983.
The Internet Engineering Task Force published the original specifications in RFC 882 and RFC 883 in November 1983.
In 1984, four UC Berkeley students, Douglas Terry, Mark Painter, David Riggle, and Songnian Zhou, wrote the first Unix name server implementation for the Berkeley Internet Name Domain, commonly referred to as BIND. In 1985, Kevin Dunlap of DEC substantially revised the DNS implementation. Mike Karels, Phil Almquist, and Paul Vixie have maintained BIND since then. In the early 1990s, BIND was ported to the Windows NT platform.
In November 1987, RFC 1034 and RFC 1035 superseded the 1983 DNS specifications. Several additional Request for Comments have proposed extensions to the core DNS protocols.
Structure
Domain name space
The domain name space consists of a tree data structure. Each node or leaf in the tree has a label and zero or more resource records (RR), which hold information associated with the domain name. The domain name itself consists of the label, concatenated with the name of its parent node on the right, separated by a dot.
The tree sub-divides into zones beginning at the root zone. A DNS zone may consist of only one domain, or may consist of many domains and sub-domains, depending on the administrative choices of the zone manager. DNS can also be partitioned according to class where the separate classes can be thought of as an array of parallel namespace trees.
Administrative responsibility for any zone may be divided by creating additional zones. Authority over the new zone is said to be delegated to a designated name server. The parent zone ceases to be authoritative for the new zone.
Domain name syntax, internationalization
The definitive descriptions of the rules for forming domain names appear in RFC 1035, RFC 1123, RFC 2181, and RFC 5892. A domain name consists of one or more parts, technically called labels, that are conventionally concatenated, and delimited by dots, such as example.com.
The right-most label conveys the top-level domain; for example, the domain name www.example.com belongs to the top-level domain com.
The hierarchy of domains descends from right to left; each label to the left specifies a subdivision, or subdomain of the domain to the right. For example, the label example specifies a subdomain of the com domain, and www is a subdomain of example.com. This tree of subdivisions may have up to 127 levels.
A label may contain zero to 63 characters. The null label, of length zero, is reserved for the root zone. The full domain name may not exceed the length of 253 characters in its textual representation. In the internal binary representation of the DNS the maximum length requires 255 octets of storage, as it also stores the length of the name.
Although no technical limitation exists to prevent domain name labels using any character which is representable by an octet, hostnames use a preferred format and character set. The characters allowed in labels are a subset of the ASCII character set, consisting of characters a through z, A through Z, digits 0 through 9, and hyphen. This rule is known as the LDH rule (letters, digits, hyphen). Domain names are interpreted in case-independent manner. Labels may not start or end with a hyphen. An additional rule requires that top-level domain names should not be all-numeric.
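These syntax rules can be checked mechanically. The following sketch is one possible validator for the LDH rule and the length limits described above; it deliberately ignores internationalized names and the "not all-numeric" rule for top-level domains.

```python
# One possible validator for LDH labels of 1-63 characters, no leading or
# trailing hyphen, and a 253-character limit on the full textual name.
import re

LDH_LABEL = re.compile(r"^(?!-)[A-Za-z0-9-]{1,63}(?<!-)$")

def is_valid_hostname(name: str) -> bool:
    name = name.rstrip(".")              # a single trailing dot denotes the root
    if not name or len(name) > 253:      # limit on the full textual name
        return False
    return all(LDH_LABEL.match(label) for label in name.split("."))

print(is_valid_hostname("www.example.com"))   # True
print(is_valid_hostname("-bad-.example"))     # False
```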
The limited set of ASCII characters permitted in the DNS prevented the representation of names and words of many languages in their native alphabets or scripts. To make this possible, ICANN approved the Internationalizing Domain Names in Applications (IDNA) system, by which user applications, such as web browsers, map Unicode strings into the valid DNS character set using Punycode. In 2009 ICANN approved the installation of internationalized domain name country code top-level domains (ccTLDs). In addition, many registries of the existing top-level domain names (TLDs) have adopted the IDNA system, guided by RFC 5890, RFC 5891, RFC 5892, RFC 5893.
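As an illustration of this mapping, Python's built-in "idna" codec (which implements the older IDNA 2003 rules rather than the RFC 5890 series) converts a Unicode name to its ASCII-compatible Punycode form and back.

```python
# Map a Unicode name to its ASCII-compatible ("xn--...") Punycode form and back
# using the standard library's IDNA 2003 codec.
unicode_name = "bücher.example"
ascii_name = unicode_name.encode("idna").decode("ascii")
print(ascii_name)                                # xn--bcher-kva.example
print(ascii_name.encode("ascii").decode("idna")) # bücher.example
```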
Name servers
The Domain Name System is maintained by a distributed database system, which uses the client–server model. The nodes of this database are the name servers. Each domain has at least one authoritative DNS server that publishes information about that domain and the name servers of any domains subordinate to it. The top of the hierarchy is served by the root name servers, the servers to query when looking up (resolving) a TLD.
Authoritative name server
An authoritative name server is a name server that only gives answers to DNS queries from data that has been configured by an original source, for example, the domain administrator or by dynamic DNS methods, in contrast to answers obtained via a query to another name server that only maintains a cache of data.
An authoritative name server can either be a primary server or a secondary server. Historically the terms master/slave and primary/secondary were sometimes used interchangeably but the current practice is to use the latter form. A primary server is a server that stores the original copies of all zone records. A secondary server uses a special automatic updating mechanism in the DNS protocol in communication with its primary to maintain an identical copy of the primary records.
Every DNS zone must be assigned a set of authoritative name servers. This set of servers is stored in the parent domain zone with name server (NS) records.
An authoritative server indicates its status of supplying definitive answers, deemed authoritative, by setting a protocol flag, called the "Authoritative Answer" (AA) bit in its responses. This flag is usually reproduced prominently in the output of DNS administration query tools, such as dig, to indicate that the responding name server is an authority for the domain name in question.
When a name server is designated as the authoritative server for a domain name for which it does not have authoritative data, it presents a type of error called a "lame delegation" or "lame response".
Operation
Address resolution mechanism
Domain name resolvers determine the domain name servers responsible for the domain name in question by a sequence of queries starting with the right-most (top-level) domain label.
For proper operation of its domain name resolver, a network host is configured with an initial cache (hints) of the known addresses of the root name servers. The hints are updated periodically by an administrator by retrieving a dataset from a reliable source.
Assuming the resolver has no cached records to accelerate the process, the resolution process starts with a query to one of the root servers. In typical operation, the root servers do not answer directly, but respond with a referral to more authoritative servers, e.g., a query for "www.wikipedia.org" is referred to the org servers. The resolver now queries the servers referred to, and iteratively repeats this process until it receives an authoritative answer. The diagram illustrates this process for the host that is named by the fully qualified domain name "www.wikipedia.org".
This mechanism would place a large traffic burden on the root servers, if every resolution on the Internet required starting at the root. In practice caching is used in DNS servers to off-load the root servers, and as a result, root name servers actually are involved in only a relatively small fraction of all requests.
Recursive and caching name server
In theory, authoritative name servers are sufficient for the operation of the Internet. However, with only authoritative name servers operating, every DNS query must start with recursive queries at the root zone of the Domain Name System and each user system would have to implement resolver software capable of recursive operation.
To improve efficiency, reduce DNS traffic across the Internet, and increase performance in end-user applications, the Domain Name System supports DNS cache servers which store DNS query results for a period of time determined in the configuration (time-to-live) of the domain name record in question.
Typically, such caching DNS servers also implement the recursive algorithm necessary to resolve a given name starting with the DNS root through to the authoritative name servers of the queried domain. With this function implemented in the name server, user applications gain efficiency in design and operation.
The combination of DNS caching and recursive functions in a name server is not mandatory; the functions can be implemented independently in servers for special purposes.
Internet service providers typically provide recursive and caching name servers for their customers. In addition, many home networking routers implement DNS caches and recursion to improve efficiency in the local network.
DNS resolvers
The client side of the DNS is called a DNS resolver. A resolver is responsible for initiating and sequencing the queries that ultimately lead to a full resolution (translation) of the resource sought, e.g., translation of a domain name into an IP address. DNS resolvers are classified by a variety of query methods, such as recursive, non-recursive, and iterative. A resolution process may use a combination of these methods.
In a non-recursive query, a DNS resolver queries a DNS server that provides a record either for which the server is authoritative, or it provides a partial result without querying other servers. In case of a caching DNS resolver, the non-recursive query of its local DNS cache delivers a result and reduces the load on upstream DNS servers by caching DNS resource records for a period of time after an initial response from upstream DNS servers.
In a recursive query, a DNS resolver queries a single DNS server, which may in turn query other DNS servers on behalf of the requester. For example, a simple stub resolver running on a home router typically makes a recursive query to the DNS server run by the user's ISP. A recursive query is one for which the DNS server answers the query completely by querying other name servers as needed. In typical operation, a client issues a recursive query to a caching recursive DNS server, which subsequently issues non-recursive queries to determine the answer and send a single answer back to the client. The resolver, or another DNS server acting recursively on behalf of the resolver, negotiates use of recursive service using bits in the query headers. DNS servers are not required to support recursive queries.
The iterative query procedure is a process in which a DNS resolver queries a chain of one or more DNS servers. Each server refers the client to the next server in the chain, until the current server can fully resolve the request. For example, a possible resolution of www.example.com would query a global root server, then a "com" server, and finally an "example.com" server.
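A hedged sketch of one step of this procedure, assuming the third-party dnspython package is available: a non-recursive query sent to a root server is not answered directly; the referral to the "org" servers arrives in the authority section of the response.

```python
# Sketch (assumes the third-party dnspython package): a non-recursive query to
# a root server returns a referral rather than an answer.
import dns.flags
import dns.message
import dns.query

query = dns.message.make_query("www.wikipedia.org", "A")
query.flags &= ~dns.flags.RD                 # clear "recursion desired"
response = dns.query.udp(query, "198.41.0.4", timeout=5)  # a.root-servers.net

print("answer section:", response.answer)    # empty: the root is not authoritative
for rrset in response.authority:             # NS records delegating org
    print(rrset)
```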
Circular dependencies and glue records
Name servers in delegations are identified by name, rather than by IP address. This means that a resolving name server must issue another DNS request to find out the IP address of the server to which it has been referred. If the name given in the delegation is a subdomain of the domain for which the delegation is being provided, there is a circular dependency.
In this case, the name server providing the delegation must also provide one or more IP addresses for the authoritative name server mentioned in the delegation. This information is called glue. The delegating name server provides this glue in the form of records in the additional section of the DNS response, and provides the delegation in the authority section of the response. A glue record is a combination of the name server and IP address.
For example, if the authoritative name server for example.org is ns1.example.org, a computer trying to resolve www.example.org first resolves ns1.example.org. As ns1 is contained in example.org, this requires resolving example.org first, which presents a circular dependency. To break the dependency, the name server for the top level domain org includes glue along with the delegation for example.org. The glue records are address records that provide IP addresses for ns1.example.org. The resolver uses one or more of these IP addresses to query one of the domain's authoritative servers, which allows it to complete the DNS query.
Record caching
A standard practice in implementing name resolution in applications is to reduce the load on the Domain Name System servers by caching results locally, or in intermediate resolver hosts. Results obtained from a DNS request are always associated with the time to live (TTL), an expiration time after which the results must be discarded or refreshed. The TTL is set by the administrator of the authoritative DNS server. The period of validity may vary from a few seconds to days or even weeks.
As a result of this distributed caching architecture, changes to DNS records do not propagate throughout the network immediately, but require all caches to expire and to be refreshed after the TTL. RFC 1912 conveys basic rules for determining appropriate TTL values.
Some resolvers may override TTL values, as the protocol supports caching for up to sixty-eight years or no caching at all. Negative caching, i.e. the caching of the fact of non-existence of a record, is determined by name servers authoritative for a zone which must include the Start of Authority (SOA) record when reporting no data of the requested type exists. The value of the minimum field of the SOA record and the TTL of the SOA itself is used to establish the TTL for the negative answer.
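A minimal sketch of such a TTL-respecting cache, using only the Python standard library; the cached record uses a documentation address, and real resolvers additionally perform the negative caching described above.

```python
# Minimal TTL-respecting record cache; entries are discarded once expired and
# must then be refreshed from upstream servers.
import time

class RecordCache:
    def __init__(self):
        self._store = {}                       # (name, rtype) -> (expiry, records)

    def put(self, name, rtype, records, ttl):
        self._store[(name, rtype)] = (time.monotonic() + ttl, records)

    def get(self, name, rtype):
        entry = self._store.get((name, rtype))
        if entry is None:
            return None                        # miss: must query upstream
        expiry, records = entry
        if time.monotonic() >= expiry:         # expired: discard and refresh
            del self._store[(name, rtype)]
            return None
        return records

cache = RecordCache()
cache.put("www.example.com", "A", ["192.0.2.1"], ttl=300)  # documentation address
print(cache.get("www.example.com", "A"))
```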
Reverse lookup
A reverse DNS lookup is a query of the DNS for domain names when the IP address is known. Multiple domain names may be associated with an IP address. The DNS stores IP addresses in the form of domain names as specially formatted names in pointer (PTR) records within the infrastructure top-level domain arpa. For IPv4, the domain is in-addr.arpa. For IPv6, the reverse lookup domain is ip6.arpa. The IP address is represented as a name in reverse-ordered octet representation for IPv4, and reverse-ordered nibble representation for IPv6.
When performing a reverse lookup, the DNS client converts the address into these formats before querying the name for a PTR record following the delegation chain as for any DNS query. For example, assuming the IPv4 address 208.80.152.2 is assigned to Wikimedia, it is represented as a DNS name in reverse order: 2.152.80.208.in-addr.arpa. When the DNS resolver gets a pointer (PTR) request, it begins by querying the root servers, which point to the servers of American Registry for Internet Numbers (ARIN) for the 208.in-addr.arpa zone. ARIN's servers delegate 152.80.208.in-addr.arpa to Wikimedia to which the resolver sends another query for 2.152.80.208.in-addr.arpa, which results in an authoritative response.
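The reversed-octet and reversed-nibble names described above can be produced directly with the Python standard library, as in this short sketch.

```python
# Build the special reverse-lookup names using the ipaddress module.
import ipaddress

print(ipaddress.ip_address("208.80.152.2").reverse_pointer)
# -> 2.152.80.208.in-addr.arpa
print(ipaddress.ip_address("2001:db8::1").reverse_pointer)
# -> nibble-reversed name ending in 8.b.d.0.1.0.0.2.ip6.arpa
```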
Client lookup
Users generally do not communicate directly with a DNS resolver. Instead DNS resolution takes place transparently in applications such as web browsers, e-mail clients, and other Internet applications. When an application makes a request that requires a domain name lookup, such programs send a resolution request to the DNS resolver in the local operating system, which in turn handles the communications required.
The DNS resolver will almost invariably have a cache (see above) containing recent lookups. If the cache can provide the answer to the request, the resolver will return the value in the cache to the program that made the request. If the cache does not contain the answer, the resolver will send the request to one or more designated DNS servers. In the case of most home users, the Internet service provider to which the machine connects will usually supply this DNS server: such a user will either have configured that server's address manually or allowed DHCP to set it; however, where systems administrators have configured systems to use their own DNS servers, their DNS resolvers point to separately maintained name servers of the organization. In any event, the name server thus queried will follow the process outlined above, until it either successfully finds a result or does not. It then returns its results to the DNS resolver; assuming it has found a result, the resolver duly caches that result for future use, and hands the result back to the software which initiated the request.
Broken resolvers
Some large ISPs have configured their DNS servers to violate rules, such as by disobeying TTLs, or by indicating that a domain name does not exist just because one of its name servers does not respond.
Some applications such as web browsers maintain an internal DNS cache to avoid repeated lookups via the network. This practice can add extra difficulty when debugging DNS issues as it obscures the history of such data. These caches typically use very short caching times on the order of one minute.
Internet Explorer represents a notable exception: versions up to IE 3.x cache DNS records for 24 hours by default. Internet Explorer 4.x and later versions (up to IE 8) decrease the default timeout value to half an hour, which may be changed by modifying the default configuration.
When Google Chrome detects issues with the DNS server it displays a specific error message.
Other applications
The Domain Name System includes several other functions and features.
Hostnames and IP addresses are not required to match in a one-to-one relationship. Multiple hostnames may correspond to a single IP address, which is useful in virtual hosting, in which many web sites are served from a single host. Alternatively, a single hostname may resolve to many IP addresses to facilitate fault tolerance and load distribution to multiple server instances across an enterprise or the global Internet.
DNS serves other purposes in addition to translating names to IP addresses. For instance, mail transfer agents use DNS to find the best mail server to deliver e-mail: An MX record provides a mapping between a domain and a mail exchanger; this can provide an additional layer of fault tolerance and load distribution.
The DNS is used for efficient storage and distribution of IP addresses of blacklisted email hosts. A common method is to place the IP address of the subject host into the sub-domain of a higher level domain name, and to resolve that name to a record that gives a positive or a negative indication.
For example:
The address 102.3.4.5 is blacklisted. It points to 5.4.3.102.blacklist.example, which resolves to 127.0.0.1.
The address 102.3.4.6 is not blacklisted and points to 6.4.3.102.blacklist.example. This hostname is either not configured, or resolves to 127.0.0.2.
E-mail servers can query blacklist.example to find out if a specific host connecting to them is in the blacklist. Many of such blacklists, either subscription-based or free of cost, are available for use by email administrators and anti-spam software.
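A sketch of how a mail server might perform such a check, reusing the illustrative blacklist.example zone from the example above; the helper names are invented for the example.

```python
# Build the reversed-octet DNSBL name and query it; any answer (for example a
# 127.0.0.x address) means "listed", while a resolution failure means "not listed".
import socket

def dnsbl_name(ip: str, zone: str = "blacklist.example") -> str:
    return ".".join(reversed(ip.split("."))) + "." + zone

def is_listed(ip: str, zone: str = "blacklist.example") -> bool:
    try:
        socket.gethostbyname(dnsbl_name(ip, zone))
        return True
    except socket.gaierror:
        return False

print(dnsbl_name("102.3.4.5"))      # 5.4.3.102.blacklist.example
```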
To provide resilience in the event of computer or network failure, multiple DNS servers are usually provided for coverage of each domain. At the top level of global DNS, thirteen groups of root name servers exist, with additional "copies" of them distributed worldwide via anycast addressing.
Dynamic DNS (DDNS) updates a DNS server with a client IP address on-the-fly, for example, when moving between ISPs or mobile hot spots, or when the IP address changes administratively.
DNS message format
The DNS protocol uses two types of DNS messages, queries and replies; both have the same format. Each message consists of a header and four sections: question, answer, authority, and an additional space. A header field (flags) controls the content of these four sections.
The header section consists of the following fields: Identification, Flags, Number of questions, Number of answers, Number of authority resource records (RRs), and Number of additional RRs. Each field is 16 bits long, and appears in the order given. The identification field is used to match responses with queries. The flag field consists of sub-fields as follows:
After the flags field, the header ends with four 16-bit integers that give the number of records in each of the sections that follow, in the same order.
Question section
The question section has a simpler format than the resource record format used in the other sections. Each question record (there is usually just one in the section) contains the following fields:
The domain name is broken into discrete labels which are concatenated; each label is prefixed by the length of that label.
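The length-prefixed label encoding and the fixed 12-byte header can be written out with a few lines of standard-library Python; this sketch builds a plain A/IN query with the recursion-desired flag set and uses no EDNS and no label compression.

```python
# Encode a query name as length-prefixed labels and prepend a 12-byte header.
import struct

def encode_qname(name: str) -> bytes:
    out = b""
    for label in name.rstrip(".").split("."):
        raw = label.encode("ascii")
        out += bytes([len(raw)]) + raw          # one length octet per label
    return out + b"\x00"                        # zero-length root label terminates

def build_query(name: str, qtype: int = 1, qclass: int = 1) -> bytes:
    # ID, flags (RD bit set), QDCOUNT=1, ANCOUNT=NSCOUNT=ARCOUNT=0
    header = struct.pack("!HHHHHH", 0x1234, 0x0100, 1, 0, 0, 0)
    question = encode_qname(name) + struct.pack("!HH", qtype, qclass)
    return header + question

print(build_query("www.example.com").hex())
```

A message built this way could be sent in a single UDP datagram to port 53, as described in the next section.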
DNS transport protocols
DNS-over-UDP/53 ("Do53")
From the time of its origin in 1983 until quite recently, DNS has primarily answered queries on User Datagram Protocol (UDP) port number 53. Such queries consist of a clear-text request sent in a single UDP packet from the client, responded to with a clear-text reply sent in a single UDP packet from the server. When the length of the answer exceeds 512 bytes and both client and server support Extension Mechanisms for DNS (EDNS), larger UDP packets may be used. Use of DNS-over-UDP is limited by, among other things, its lack of transport-layer encryption, authentication, reliable delivery, and message length.
DNS-over-TCP/53 ("Do53/TCP")
In 1989, RFC 1123 specified optional Transmission Control Protocol (TCP) transport for DNS queries, replies and, particularly, zone transfers. Via fragmentation of long replies, TCP allows longer responses, reliable delivery, and re-use of long-lived connections between clients and servers.
DNSCrypt
The DNSCrypt protocol, which was developed in 2011 outside the IETF standards framework, introduced DNS encryption on the downstream side of recursive resolvers, wherein clients encrypt query payloads using servers' public keys, which are published in the DNS (rather than relying upon third-party certificate authorities) and which may in turn be protected by DNSSEC signatures. DNSCrypt uses either TCP or UDP port 443, the same port as HTTPS encrypted web traffic. This introduced not only privacy regarding the content of the query, but also a significant measure of firewall-traversal capability. In 2019, DNSCrypt was further extended to support an "anonymized" mode, similar to the proposed "Oblivious DNS," in which an ingress node receives a query which has been encrypted with the public key of a different server, and relays it to that server, which acts as an egress node, performing the recursive resolution. Privacy of user/query pairs is created, since the ingress node does not know the content of the query, while the egress node does not know the identity of the client. DNSCrypt was first implemented in production by OpenDNS in December 2011.
DNS-over-TLS ("DoT")
An IETF standard for encrypted DNS emerged in 2016, utilizing standard Transport Layer Security (TLS) to protect the entire connection, rather than just the DNS payload. DoT servers listen on TCP port 853. RFC 7858 specifies that opportunistic encryption and authenticated encryption may be supported, but does not make either server or client authentication mandatory.
DNS-over-HTTPS ("DoH")
A competing standard for DNS query transport was introduced in 2018, tunneling DNS query data over HTTPS (which in turn transports HTTP over TLS). DoH was promoted as a more web-friendly alternative to DNS since, like DNSCrypt, it travels on TCP port 443, and thus looks similar to web traffic, though they are easily differentiable in practice. DoH has been widely criticized for decreasing user anonymity relative to DoT.
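As an illustration only, some public resolvers expose a JSON front-end alongside RFC 8484 wire-format DoH; the sketch below queries one such endpoint (Google Public DNS), which is a service choice rather than part of the DoH specification itself.

```python
# Query a public resolver's JSON front-end over HTTPS using the standard library.
import json
import urllib.request

url = "https://dns.google/resolve?name=www.example.com&type=A"
with urllib.request.urlopen(url, timeout=5) as resp:
    data = json.load(resp)

for answer in data.get("Answer", []):
    print(answer["name"], answer["type"], answer["data"])
```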
DNS-over-TOR
Like other Internet protocols, DNS may be run over VPNs and tunnels. One use which has become common enough since 2019 to warrant its own frequently used acronym is DNS-over-Tor. The privacy gains of Oblivious DNS can be garnered through the use of the preexisting Tor network of ingress and egress nodes, paired with the transport-layer encryption provided by TLS.
Oblivious DNS-over-HTTPS ("ODoH")
In 2021, an "oblivious" implementation of DoH was proposed and has been implemented in draft form, combining ingress/egress separation with HTTPS tunneling and TLS transport-layer encryption in a single defined protocol.
Resource records
The Domain Name System specifies a database of information elements for network resources. The types of information elements are categorized and organized with a list of DNS record types, the resource records (RRs). Each record has a type (name and number), an expiration time (time to live), a class, and type-specific data. Resource records of the same type are described as a resource record set (RRset), having no special ordering. DNS resolvers return the entire set upon query, but servers may implement round-robin ordering to achieve load balancing. In contrast, the Domain Name System Security Extensions (DNSSEC) work on the complete set of resource records in canonical order.
When sent over an Internet Protocol network, all records use the common format specified in RFC 1035:
NAME is the fully qualified domain name of the node in the tree. On the wire, the name may be shortened using label compression where ends of domain names mentioned earlier in the packet can be substituted for the end of the current domain name.
TYPE is the record type. It indicates the format of the data and it gives a hint of its intended use. For example, the A record is used to translate from a domain name to an IPv4 address, the NS record lists which name servers can answer lookups on a DNS zone, and the MX record specifies the mail server used to handle mail for a domain specified in an e-mail address.
RDATA is data of type-specific relevance, such as the IP address for address records, or the priority and hostname for MX records. Well known record types may use label compression in the RDATA field, but "unknown" record types must not (RFC 3597).
The CLASS of a record is set to IN (for Internet) for common DNS records involving Internet hostnames, servers, or IP addresses. In addition, the classes Chaos (CH) and Hesiod (HS) exist. Each class is an independent name space with potentially different delegations of DNS zones.
In addition to resource records defined in a zone file, the domain name system also defines several request types that are used only in communication with other DNS nodes (on the wire), such as when performing zone transfers (AXFR/IXFR) or for EDNS (OPT).
Wildcard DNS records
The domain name system supports wildcard DNS records which specify names that start with the asterisk label, '*', e.g., *.example. DNS records belonging to wildcard domain names specify rules for generating resource records within a single DNS zone by substituting whole labels with matching components of the query name, including any specified descendants. For example, in the following configuration, the DNS zone x.example specifies that all subdomains, including subdomains of subdomains, of x.example use the mail exchanger (MX) a.x.example. The A record for a.x.example is needed to specify the mail exchanger IP address. As this has the result of excluding this domain name and its subdomains from the wildcard matches, an additional MX record for the subdomain a.x.example, as well as a wildcarded MX record for all of its subdomains, must also be defined in the DNS zone.
x.example. MX 10 a.x.example.
*.x.example. MX 10 a.x.example.
*.a.x.example. MX 10 a.x.example.
a.x.example. MX 10 a.x.example.
a.x.example. AAAA 2001:db8::1
The role of wildcard records was refined in RFC 4592, because the original definition in RFC 1034 was incomplete and resulted in misinterpretations by implementers.
Protocol extensions
The original DNS protocol had limited provisions for extension with new features. In 1999, Paul Vixie published in RFC 2671 (superseded by RFC 6891) an extension mechanism, called Extension Mechanisms for DNS (EDNS) that introduced optional protocol elements without increasing overhead when not in use. This was accomplished through the OPT pseudo-resource record that only exists in wire transmissions of the protocol, but not in any zone files. Initial extensions were also suggested (EDNS0), such as increasing the DNS message size in UDP datagrams.
Dynamic zone updates
Dynamic DNS updates use the UPDATE DNS opcode to add or remove resource records dynamically from a zone database maintained on an authoritative DNS server. The feature is described in RFC 2136. This facility is useful to register network clients into the DNS when they boot or become otherwise available on the network. As a booting client may be assigned a different IP address each time from a DHCP server, it is not possible to provide static DNS assignments for such clients.
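A hedged sketch of such an update, assuming the third-party dnspython package; the zone, server address and TSIG key shown are hypothetical, and a real server would have to be configured to authorize the update.

```python
# RFC 2136 dynamic update replacing an A record, authenticated with TSIG.
import dns.query
import dns.rcode
import dns.tsigkeyring
import dns.update

keyring = dns.tsigkeyring.from_text({"update-key.": "bWFkZSB1cCBzZWNyZXQ="})
update = dns.update.Update("dyn.example.org", keyring=keyring)
update.replace("host42", 300, "A", "192.0.2.42")   # host42.dyn.example.org -> A

response = dns.query.tcp(update, "192.0.2.53", timeout=5)
print(dns.rcode.to_text(response.rcode()))
```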
Security issues
Originally, security concerns were not major design considerations for DNS software or any software for deployment on the early Internet, as the network was not open for participation by the general public. However, the expansion of the Internet into the commercial sector in the 1990s changed the requirements for security measures to protect data integrity and user authentication.
Several vulnerability issues were discovered and exploited by malicious users. One such issue is DNS cache poisoning, in which data is distributed to caching resolvers under the pretense of being an authoritative origin server, thereby polluting the data store with potentially false information and long expiration times (time-to-live). Subsequently, legitimate application requests may be redirected to network hosts operated with malicious intent.
DNS responses traditionally do not have a cryptographic signature, leading to many attack possibilities; the Domain Name System Security Extensions (DNSSEC) modify DNS to add support for cryptographically signed responses. DNSCurve has been proposed as an alternative to DNSSEC. Other extensions, such as TSIG, add support for cryptographic authentication between trusted peers and are commonly used to authorize zone transfer or dynamic update operations.
Some domain names may be used to achieve spoofing effects. For example, paypal.com and paypa1.com are different names, yet users may be unable to distinguish them in a graphical user interface depending on the user's chosen typeface. In many fonts the letter l and the numeral 1 look very similar or even identical. This problem is acute in systems that support internationalized domain names, as many character codes in ISO 10646 may appear identical on typical computer screens. This vulnerability is occasionally exploited in phishing.
Techniques such as forward-confirmed reverse DNS can also be used to help validate DNS results.
DNS can also "leak" from otherwise secure or private connections, if attention is not paid to their configuration, and at times DNS has been used to bypass firewalls by malicious persons, and exfiltrate data, since it is often seen as innocuous.
Privacy and tracking issues
Originally designed as a public, hierarchical, distributed and heavily cached database, the DNS protocol has no confidentiality controls. User queries and nameserver responses are sent unencrypted, which enables network packet sniffing, DNS hijacking, DNS cache poisoning and man-in-the-middle attacks. This deficiency is commonly used by cybercriminals and network operators for marketing purposes, user authentication on captive portals and censorship.
User privacy is further exposed by proposals for increasing the level of client IP information in DNS queries (RFC 7871) for the benefit of Content Delivery Networks.
The main approaches in use to counter privacy issues with DNS are:
VPNs, which move DNS resolution to the VPN operator and hide user traffic from local ISP,
Tor, which replaces traditional DNS resolution with anonymous .onion domains, hiding both name resolution and user traffic behind onion routing counter-surveillance,
Proxies and public DNS servers, which move the actual DNS resolution to a third-party provider, who usually promises little or no request logging and optional added features, such as DNS-level advertisement or pornography blocking.
Public DNS servers can be queried using the traditional DNS protocol, in which case they provide no protection from local surveillance, or using DNS-over-HTTPS, DNS-over-TLS or DNSCrypt, which do provide such protection.
Solutions preventing DNS inspection by the local network operator are criticized for thwarting corporate network security policies and Internet censorship. They are also criticized from a privacy point of view, as they hand DNS resolution over to a small number of companies known for monetizing user traffic and for centralizing DNS name resolution, which is generally perceived as harmful for the Internet.
Domain name registration
The right to use a domain name is delegated by domain name registrars, which are accredited by the Internet Corporation for Assigned Names and Numbers (ICANN) or by other organizations such as OpenNIC that are charged with overseeing the name and number systems of the Internet. In addition to ICANN, each top-level domain (TLD) is maintained and serviced technically by an administrative organization, operating a registry. A registry is responsible for operating the database of names within its authoritative zone, although the term is most often used for TLDs. A registrant is a person or organization that requests domain registration. The registry receives registration information from each domain name registrar, which is authorized (accredited) to assign names in the corresponding zone, and publishes the information using the WHOIS protocol. As of 2015, usage of RDAP is being considered.
ICANN publishes the complete list of TLDs, TLD registries, and domain name registrars. Registrant information associated with domain names is maintained in an online database accessible with the WHOIS service. For most of the more than 290 country code top-level domains (ccTLDs), the domain registries maintain the WHOIS (Registrant, name servers, expiration dates, etc.) information. For instance, DENIC, Germany NIC, holds the DE domain data. From about 2001, most Generic top-level domain (gTLD) registries have adopted this so-called thick registry approach, i.e. keeping the WHOIS data in central registries instead of registrar databases.
For the top-level domains COM and NET, a thin registry model is used. The domain registry (e.g., GoDaddy, BigRock and PDR, VeriSign, etc.) holds basic WHOIS data (i.e., registrar and name servers, etc.). Organizations or registrants using ORG, on the other hand, are registered exclusively with the Public Interest Registry.
Some domain name registries, often called network information centers (NIC), also function as registrars to end-users, in addition to providing access to the WHOIS datasets. The top-level domain registries, such as for the domains COM, NET, and ORG use a registry-registrar model consisting of many domain name registrars. In this method of management, the registry only manages the domain name database and the relationship with the registrars. The registrants (users of a domain name) are customers of the registrar, in some cases through additional subcontracting of resellers.
RFC documents
Standards
The Domain Name System is defined by Request for Comments (RFC) documents published by the Internet Engineering Task Force (Internet standards). The following is a list of RFCs that define the DNS protocol.
, Domain Names - Concepts and Facilities
, Domain Names - Implementation and Specification
, Requirements for Internet Hosts—Application and Support
, Incremental Zone Transfer in DNS
, A Mechanism for Prompt Notification of Zone Changes (DNS NOTIFY)
, Dynamic Updates in the domain name system (DNS UPDATE)
, Clarifications to the DNS Specification
, Negative Caching of DNS Queries (DNS NCACHE)
, Non-Terminal DNS Name Redirection
, Secret Key Transaction Authentication for DNS (TSIG)
, Indicating Resolver Support of DNSSEC
, DNSSEC and IPv6 A6 aware server/resolver message size requirements
, DNS Extensions to Support IP Version 6
, Handling of Unknown DNS Resource Record (RR) Types
, Domain Name System (DNS) Case Insensitivity Clarification
, The Role of Wildcards in the Domain Name System
, HMAC SHA TSIG Algorithm Identifiers
, DNS Name Server Identifier (NSID) Option
, Automated Updates of DNS Security (DNSSEC) Trust Anchors
, Measures for Making DNS More Resilient against Forged Answers
, Internationalized Domain Names for Applications (IDNA): Definitions and Document Framework
, Internationalized Domain Names in Applications (IDNA): Protocol
, The Unicode Code Points and Internationalized Domain Names for Applications (IDNA)
, Right-to-Left Scripts for Internationalized Domain Names for Applications (IDNA)
, Extension Mechanisms for DNS (EDNS0)
, DNS Transport over TCP - Implementation Requirements
Proposed security standards
, DNS Security Introduction and Requirements
, Resource Records for the DNS Security Extensions
, Protocol Modifications for the DNS Security Extensions
, Use of SHA-256 in DNSSEC Delegation Signer (DS) Resource Records
, Minimally Covering NSEC Records and DNSSEC On-line Signing
, DNS Security (DNSSEC) Hashed Authenticated Denial of Existence
, Use of SHA-2 Algorithms with RSA in DNSKEY and RRSIG Resource Records for DNSSEC
, Domain Name System (DNS) Security Extensions Mapping for the Extensible Provisioning Protocol (EPP)
, Use of GOST Signature Algorithms in DNSKEY and RRSIG Resource Records for DNSSEC
, The EDNS(0) Padding Option
, Specification for DNS over Transport Layer Security (TLS)
, Usage Profiles for DNS over TLS and DNS over DTLS
, DNS Queries over HTTPS (DoH)
Experimental RFCs
, New DNS RR Definitions
Best Current Practices
, Selection and Operation of Secondary DNS Servers (BCP 16)
, Classless IN-ADDR.ARPA delegation (BCP 20)
, DNS Proxy Implementation Guidelines (BCP 152)
, Domain Name System (DNS) IANA Considerations (BCP 42)
, DNS Root Name Service Protocol and Deployment Requirements (BCP 40)
Informational RFCs
These RFCs are advisory in nature, but may provide useful information despite defining neither a standard or BCP. (RFC 1796)
, Choosing a Name for Your Computer (FYI 5)
, Domain Name System Structure and Delegation
, Common DNS Operational and Configuration Errors
, The Naming of Hosts
, Application Techniques for Checking and Transformation of Names
, Threat Analysis of the Domain Name System (DNS)
, Requirements for a Mechanism Identifying a Name Server Instance
, Internationalized Domain Names for Applications (IDNA): Background, Explanation, and Rationale
, Mapping Characters for Internationalized Domain Names in Applications (IDNA) 2008
, DNS Privacy Considerations
, Decreasing Access Time to Root Servers by Running One on Loopback
, DNS Terminology
Unknown
These RFCs have an official status of Unknown, but due to their age are not clearly labeled as such.
, Domain Requirements – Specified original top-level domains
, Domain Administrators Guide
, Domain Administrators Operations Guide
, DNS Encodings of Network Names and Other Types
See also
Alternative DNS root
Comparison of DNS server software
Domain hijacking
DNS hijacking
DNS management software
DNS over HTTPS
DNS over TLS
Hierarchical namespace
IPv6 brokenness and DNS whitelisting
Multicast DNS
Public recursive name server
resolv.conf
Split-horizon DNS
List of DNS record types
List of managed DNS providers
Zone file
DNS leak
References
Sources
External links
Zytrax.com, Open Source Guide – DNS for Rocket Scientists.
Internet Governance and the Domain Name System: Issues for Congress, Congressional Research Service
Computer-related introductions in 1983
Application layer protocols
Internet Standards |
8377 | https://en.wikipedia.org/wiki/Database | Database | In computing, a database is an organized collection of data stored and accessed electronically. Small databases can be stored on a file system, while large databases are hosted on computer clusters or cloud storage. The design of databases spans formal techniques and practical considerations including data modeling, efficient data representation and storage, query languages, security and privacy of sensitive data, and distributed computing issues including supporting concurrent access and fault tolerance.
A database management system (DBMS) is the software that interacts with end users, applications, and the database itself to capture and analyze the data. The DBMS software additionally encompasses the core facilities provided to administer the database. The sum total of the database, the DBMS and the associated applications can be referred to as a database system. Often the term "database" is also used loosely to refer to any of the DBMS, the database system or an application associated with the database.
Computer scientists may classify database management systems according to the database models that they support. Relational databases became dominant in the 1980s. These model data as rows and columns in a series of tables, and the vast majority use SQL for writing and querying data. In the 2000s, non-relational databases became popular, collectively referred to as NoSQL because they use different query languages.
Terminology and overview
Formally, a "database" refers to a set of related data and the way it is organized. Access to this data is usually provided by a "database management system" (DBMS) consisting of an integrated set of computer software that allows users to interact with one or more databases and provides access to all of the data contained in the database (although restrictions may exist that limit access to particular data). The DBMS provides various functions that allow entry, storage and retrieval of large quantities of information and provides ways to manage how that information is organized.
Because of the close relationship between them, the term "database" is often used casually to refer to both a database and the DBMS used to manipulate it.
Outside the world of professional information technology, the term database is often used loosely to refer to any collection of related data (such as a spreadsheet or a card index), even though the size and usage requirements that characterize databases in the strict sense typically necessitate the use of a database management system.
Existing DBMSs provide various functions that allow management of a database and its data, which can be classified into four main functional groups (a short example sketch follows the list):
Data definition – Creation, modification and removal of definitions that define the organization of the data.
Update – Insertion, modification, and deletion of the actual data.
Retrieval – Providing information in a form directly usable or for further processing by other applications. The retrieved data may be made available in a form basically the same as it is stored in the database or in a new form obtained by altering or combining existing data from the database.
Administration – Registering and monitoring users, enforcing data security, monitoring performance, maintaining data integrity, dealing with concurrency control, and recovering information that has been corrupted by some event such as an unexpected system failure.
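A minimal sketch of the first three groups, using Python's standard-library sqlite3 module (the product table, its columns, and the file name are invented for illustration); administration tasks such as user management and monitoring are DBMS-specific and are not shown.

import sqlite3

conn = sqlite3.connect("example.db")

# Data definition: create the structures that will hold the data.
conn.execute("CREATE TABLE IF NOT EXISTS product (id INTEGER PRIMARY KEY, name TEXT, price REAL)")

# Update: insert, modify, and delete the actual data.
conn.execute("INSERT INTO product (name, price) VALUES (?, ?)", ("widget", 9.99))
conn.execute("UPDATE product SET price = ? WHERE name = ?", (8.99, "widget"))
conn.execute("DELETE FROM product WHERE price > ?", (100.0,))

# Retrieval: return the data in a directly usable form.
for row in conn.execute("SELECT name, price FROM product ORDER BY name"):
    print(row)

conn.commit()
conn.close()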
Both a database and its DBMS conform to the principles of a particular database model. "Database system" refers collectively to the database model, database management system, and database.
Physically, database servers are dedicated computers that hold the actual databases and run only the DBMS and related software. Database servers are usually multiprocessor computers, with generous memory and RAID disk arrays used for stable storage. Hardware database accelerators, connected to one or more servers via a high-speed channel, are also used in large volume transaction processing environments. DBMSs are found at the heart of most database applications. DBMSs may be built around a custom multitasking kernel with built-in networking support, but modern DBMSs typically rely on a standard operating system to provide these functions.
Since DBMSs comprise a significant market, computer and storage vendors often take into account DBMS requirements in their own development plans.
Databases and DBMSs can be categorized according to the database model(s) that they support (such as relational or XML), the type(s) of computer they run on (from a server cluster to a mobile phone), the query language(s) used to access the database (such as SQL or XQuery), and their internal engineering, which affects performance, scalability, resilience, and security.
History
The sizes, capabilities, and performance of databases and their respective DBMSs have grown in orders of magnitude. These performance increases were enabled by the technology progress in the areas of processors, computer memory, computer storage, and computer networks. The concept of a database was made possible by the emergence of direct access storage media such as magnetic disks, which became widely available in the mid 1960s; earlier systems relied on sequential storage of data on magnetic tape. The subsequent development of database technology can be divided into three eras based on data model or structure: navigational, SQL/relational, and post-relational.
The two main early navigational data models were the hierarchical model and the CODASYL model (network model). These were characterized by the use of pointers (often physical disk addresses) to follow relationships from one record to another.
The relational model, first proposed in 1970 by Edgar F. Codd, departed from this tradition by insisting that applications should search for data by content, rather than by following links. The relational model employs sets of ledger-style tables, each used for a different type of entity. Only in the mid-1980s did computing hardware become powerful enough to allow the wide deployment of relational systems (DBMSs plus applications). By the early 1990s, however, relational systems dominated in all large-scale data processing applications, and they remain dominant: IBM DB2, Oracle, MySQL, and Microsoft SQL Server are among the most widely used DBMSs. The dominant database language, standardised SQL for the relational model, has influenced database languages for other data models.
Object databases were developed in the 1980s to overcome the inconvenience of object–relational impedance mismatch, which led to the coining of the term "post-relational" and also the development of hybrid object–relational databases.
The next generation of post-relational databases in the late 2000s became known as NoSQL databases, introducing fast key–value stores and document-oriented databases. A competing "next generation" known as NewSQL databases attempted new implementations that retained the relational/SQL model while aiming to match the high performance of NoSQL compared to commercially available relational DBMSs.
1960s, navigational DBMS
The introduction of the term database coincided with the availability of direct-access storage (disks and drums) from the mid-1960s onwards. The term represented a contrast with the tape-based systems of the past, allowing shared interactive use rather than daily batch processing. The Oxford English Dictionary cites a 1962 report by the System Development Corporation of California as the first to use the term "data-base" in a specific technical sense.
As computers grew in speed and capability, a number of general-purpose database systems emerged; by the mid-1960s a number of such systems had come into commercial use. Interest in a standard began to grow, and Charles Bachman, author of one such product, the Integrated Data Store (IDS), founded the Database Task Group within CODASYL, the group responsible for the creation and standardization of COBOL. In 1971, the Database Task Group delivered their standard, which generally became known as the CODASYL approach, and soon a number of commercial products based on this approach entered the market.
The CODASYL approach offered applications the ability to navigate around a linked data set which was formed into a large network. Applications could find records by one of three methods:
Use of a primary key (known as a CALC key, typically implemented by hashing)
Navigating relationships (called sets) from one record to another
Scanning all the records in a sequential order
Later systems added B-trees to provide alternate access paths. Many CODASYL databases also added a declarative query language for end users (as distinct from the navigational API). However CODASYL databases were complex and required significant training and effort to produce useful applications.
IBM also had its own DBMS in 1966, known as Information Management System (IMS). IMS was a development of software written for the Apollo program on the System/360. IMS was generally similar in concept to CODASYL, but used a strict hierarchy for its model of data navigation instead of CODASYL's network model. Both concepts later became known as navigational databases due to the way data was accessed: the term was popularized by Bachman's 1973 Turing Award presentation The Programmer as Navigator. IMS is classified by IBM as a hierarchical database. IDMS and Cincom Systems' TOTAL database are classified as network databases. IMS remains in use.
1970s, relational DBMS
Edgar F. Codd worked at IBM in San Jose, California, in one of their offshoot offices that was primarily involved in the development of hard disk systems. He was unhappy with the navigational model of the CODASYL approach, notably the lack of a "search" facility. In 1970, he wrote a number of papers that outlined a new approach to database construction that eventually culminated in the groundbreaking A Relational Model of Data for Large Shared Data Banks.
In this paper, he described a new system for storing and working with large databases. Instead of records being stored in some sort of linked list of free-form records as in CODASYL, Codd's idea was to organize the data as a number of "tables", each table being used for a different type of entity. Each table would contain a fixed number of columns containing the attributes of the entity. One or more columns of each table were designated as a primary key by which the rows of the table could be uniquely identified; cross-references between tables always used these primary keys, rather than disk addresses, and queries would join tables based on these key relationships, using a set of operations based on the mathematical system of relational calculus (from which the model takes its name). Splitting the data into a set of normalized tables (or relations) aimed to ensure that each "fact" was only stored once, thus simplifying update operations. Virtual tables called views could present the data in different ways for different users, but views could not be directly updated.
Codd used mathematical terms to define the model: relations, tuples, and domains rather than tables, rows, and columns. The terminology that is now familiar came from early implementations. Codd would later criticize the tendency for practical implementations to depart from the mathematical foundations on which the model was based.
The use of primary keys (user-oriented identifiers) to represent cross-table relationships, rather than disk addresses, had two primary motivations. From an engineering perspective, it enabled tables to be relocated and resized without expensive database reorganization. But Codd was more interested in the difference in semantics: the use of explicit identifiers made it easier to define update operations with clean mathematical definitions, and it also enabled query operations to be defined in terms of the established discipline of first-order predicate calculus; because these operations have clean mathematical properties, it becomes possible to rewrite queries in provably correct ways, which is the basis of query optimization. There is no loss of expressiveness compared with the hierarchic or network models, though the connections between tables are no longer so explicit.
In the hierarchic and network models, records were allowed to have a complex internal structure. For example, the salary history of an employee might be represented as a "repeating group" within the employee record. In the relational model, the process of normalization led to such internal structures being replaced by data held in multiple tables, connected only by logical keys.
For instance, a common use of a database system is to track information about users, their name, login information, various addresses and phone numbers. In the navigational approach, all of this data would be placed in a single variable-length record. In the relational approach, the data would be normalized into a user table, an address table and a phone number table (for instance). Records would be created in these optional tables only if the address or phone numbers were actually provided.
As well as identifying rows/records using logical identifiers rather than disk addresses, Codd changed the way in which applications assembled data from multiple records. Rather than requiring applications to gather data one record at a time by navigating the links, they would use a declarative query language that expressed what data was required, rather than the access path by which it should be found. Finding an efficient access path to the data became the responsibility of the database management system, rather than the application programmer. This process, called query optimization, depended on the fact that queries were expressed in terms of mathematical logic.
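The following sketch, written with Python's built-in sqlite3 module, illustrates both points for the user/address/phone example above (table and column names are invented): the data are split across tables linked only by logical keys, and a declarative join query states what is wanted while leaving the access path to the DBMS.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE person  (id INTEGER PRIMARY KEY, name TEXT NOT NULL);
    CREATE TABLE address (id INTEGER PRIMARY KEY, person_id INTEGER REFERENCES person(id), city TEXT);
    CREATE TABLE phone   (id INTEGER PRIMARY KEY, person_id INTEGER REFERENCES person(id), number TEXT);
""")
conn.execute("INSERT INTO person (id, name) VALUES (1, 'Ada')")
conn.execute("INSERT INTO address (person_id, city) VALUES (1, 'London')")
conn.execute("INSERT INTO phone (person_id, number) VALUES (1, '555-0100')")

# Declarative query: the join is expressed in terms of keys, not disk addresses,
# and the DBMS's optimizer chooses how to evaluate it.
rows = conn.execute("""
    SELECT person.name, address.city, phone.number
    FROM person
    JOIN address ON address.person_id = person.id
    JOIN phone   ON phone.person_id   = person.id
""").fetchall()
print(rows)  # [('Ada', 'London', '555-0100')]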
Codd's paper was picked up by two people at Berkeley, Eugene Wong and Michael Stonebraker. They started a project known as INGRES using funding that had already been allocated for a geographical database project and student programmers to produce code. Beginning in 1973, INGRES delivered its first test products which were generally ready for widespread use in 1979. INGRES was similar to System R in a number of ways, including the use of a "language" for data access, known as QUEL. Over time, INGRES moved to the emerging SQL standard.
IBM itself did one test implementation of the relational model, PRTV, and a production one, Business System 12, both now discontinued. Honeywell wrote MRDS for Multics, and now there are two new implementations: Alphora Dataphor and Rel. Most other DBMS implementations usually called relational are actually SQL DBMSs.
In 1970, the University of Michigan began development of the MICRO Information Management System based on D.L. Childs' Set-Theoretic Data model. MICRO was used to manage very large data sets by the US Department of Labor, the U.S. Environmental Protection Agency, and researchers from the University of Alberta, the University of Michigan, and Wayne State University. It ran on IBM mainframe computers using the Michigan Terminal System. The system remained in production until 1998.
Integrated approach
In the 1970s and 1980s, attempts were made to build database systems with integrated hardware and software. The underlying philosophy was that such integration would provide higher performance at a lower cost. Examples were IBM System/38, the early offering of Teradata, and the Britton Lee, Inc. database machine.
Another approach to hardware support for database management was ICL's CAFS accelerator, a hardware disk controller with programmable search capabilities. In the long term, these efforts were generally unsuccessful because specialized database machines could not keep pace with the rapid development and progress of general-purpose computers. Thus most database systems nowadays are software systems running on general-purpose hardware, using general-purpose computer data storage. However, this idea is still pursued for certain applications by some companies like Netezza and Oracle (Exadata).
Late 1970s, SQL DBMS
IBM started working on a prototype system loosely based on Codd's concepts as System R in the early 1970s. The first version was ready in 1974/5, and work then started on multi-table systems in which the data could be split so that all of the data for a record (some of which is optional) did not have to be stored in a single large "chunk". Subsequent multi-user versions were tested by customers in 1978 and 1979, by which time a standardized query language – SQL – had been added. Codd's ideas were establishing themselves as both workable and superior to CODASYL, pushing IBM to develop a true production version of System R, known as SQL/DS, and, later, Database 2 (DB2).
Larry Ellison's Oracle Database (or more simply, Oracle) started from a different chain, based on IBM's papers on System R. Though Oracle V1 implementations were completed in 1978, it wasn't until Oracle Version 2 in 1979 that Ellison beat IBM to market.
Stonebraker went on to apply the lessons from INGRES to develop a new database, Postgres, which is now known as PostgreSQL. PostgreSQL is often used for global mission-critical applications (the .org and .info domain name registries use it as their primary data store, as do many large companies and financial institutions).
In Sweden, Codd's paper was also read and Mimer SQL was developed from the mid-1970s at Uppsala University. In 1984, this project was consolidated into an independent enterprise.
Another data model, the entity–relationship model, emerged in 1976 and gained popularity for database design as it emphasized a more familiar description than the earlier relational model. Later on, entity–relationship constructs were retrofitted as a data modeling construct for the relational model, and the difference between the two has become irrelevant.
1980s, on the desktop
The 1980s ushered in the age of desktop computing. The new computers empowered their users with spreadsheets like Lotus 1-2-3 and database software like dBASE. The dBASE product was lightweight and easy for any computer user to understand out of the box. C. Wayne Ratliff, the creator of dBASE, stated: "dBASE was different from programs like BASIC, C, FORTRAN, and COBOL in that a lot of the dirty work had already been done. The data manipulation is done by dBASE instead of by the user, so the user can concentrate on what he is doing, rather than having to mess with the dirty details of opening, reading, and closing files, and managing space allocation." dBASE was one of the top selling software titles in the 1980s and early 1990s.
1990s, object-oriented
The 1990s, along with a rise in object-oriented programming, saw a growth in how data in various databases were handled. Programmers and designers began to treat the data in their databases as objects. That is to say that if a person's data were in a database, that person's attributes, such as their address, phone number, and age, were now considered to belong to that person instead of being extraneous data. This allows for relations between data to be relations to objects and their attributes and not to individual fields. The term "object–relational impedance mismatch" described the inconvenience of translating between programmed objects and database tables. Object databases and object–relational databases attempt to solve this problem by providing an object-oriented language (sometimes as extensions to SQL) that programmers can use as an alternative to purely relational SQL. On the programming side, libraries known as object–relational mappings (ORMs) attempt to solve the same problem.
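A hand-rolled sketch of the idea behind an ORM, in Python with the standard sqlite3 module (the Person class and table are invented for illustration); production ORMs add identity maps, change tracking, relationship handling, and query generation on top of this basic object-to-row mapping.

import sqlite3
from dataclasses import dataclass

@dataclass
class Person:
    id: int
    name: str
    age: int

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE person (id INTEGER PRIMARY KEY, name TEXT, age INTEGER)")

def save(p: Person) -> None:
    # Map object attributes to table columns.
    conn.execute("INSERT OR REPLACE INTO person (id, name, age) VALUES (?, ?, ?)",
                 (p.id, p.name, p.age))

def load(person_id: int) -> Person:
    # Map a table row back to an object.
    row = conn.execute("SELECT id, name, age FROM person WHERE id = ?", (person_id,)).fetchone()
    return Person(*row)

save(Person(1, "Ada", 36))
print(load(1))  # Person(id=1, name='Ada', age=36)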
2000s, NoSQL and NewSQL
XML databases are a type of structured document-oriented database that allows querying based on XML document attributes. XML databases are mostly used in applications where the data is conveniently viewed as a collection of documents, with a structure that can vary from the very flexible to the highly rigid: examples include scientific articles, patents, tax filings, and personnel records.
NoSQL databases are often very fast, do not require fixed table schemas, avoid join operations by storing denormalized data, and are designed to scale horizontally.
In recent years, there has been a strong demand for massively distributed databases with high partition tolerance, but according to the CAP theorem it is impossible for a distributed system to simultaneously provide consistency, availability, and partition tolerance guarantees. A distributed system can satisfy any two of these guarantees at the same time, but not all three. For that reason, many NoSQL databases are using what is called eventual consistency to provide both availability and partition tolerance guarantees with a reduced level of data consistency.
NewSQL is a class of modern relational databases that aims to provide the same scalable performance of NoSQL systems for online transaction processing (read-write) workloads while still using SQL and maintaining the ACID guarantees of a traditional database system.
Use cases
Databases are used to support internal operations of organizations and to underpin online interactions with customers and suppliers (see Enterprise software).
Databases are used to hold administrative information and more specialized data, such as engineering data or economic models. Examples include computerized library systems, flight reservation systems, computerized parts inventory systems, and many content management systems that store websites as collections of webpages in a database.
Classification
One way to classify databases involves the type of their contents, for example: bibliographic, document-text, statistical, or multimedia objects. Another way is by their application area, for example: accounting, music compositions, movies, banking, manufacturing, or insurance. A third way is by some technical aspect, such as the database structure or interface type. This section lists a few of the adjectives used to characterize different kinds of databases.
An in-memory database is a database that primarily resides in main memory, but is typically backed up by non-volatile computer data storage. Main memory databases are faster than disk databases, and so are often used where response time is critical, such as in telecommunications network equipment.
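For illustration, SQLite (accessed here through Python's sqlite3 module) can hold an entire database purely in main memory; the table and data are invented, and any persistence must be arranged separately, for example by copying to a file-backed database.

import sqlite3

# ":memory:" keeps the whole database in main memory; nothing is written to disk.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE metric (name TEXT, value REAL)")
conn.execute("INSERT INTO metric VALUES ('latency_ms', 1.7)")
print(conn.execute("SELECT * FROM metric").fetchall())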
An active database includes an event-driven architecture which can respond to conditions both inside and outside the database. Possible uses include security monitoring, alerting, statistics gathering and authorization. Many databases provide active database features in the form of database triggers.
A cloud database relies on cloud technology. Both the database and most of its DBMS reside remotely, "in the cloud", while its applications are both developed by programmers and later maintained and used by end-users through a web browser and Open APIs.
Data warehouses archive data from operational databases and often from external sources such as market research firms. The warehouse becomes the central source of data for use by managers and other end-users who may not have access to operational data. For example, sales data might be aggregated to weekly totals and converted from internal product codes to use UPCs so that they can be compared with ACNielsen data. Some basic and essential components of data warehousing include extracting, analyzing, and mining data, transforming, loading, and managing data so as to make them available for further use.
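A toy sketch of the aggregation step described above, in Python/sqlite3 (the sales table, its columns, and the figures are invented); a real warehouse load would also translate product codes and handle far larger volumes through dedicated extract-transform-load tooling.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (sale_date TEXT, product_code TEXT, amount REAL)")
conn.executemany("INSERT INTO sales VALUES (?, ?, ?)", [
    ("2024-01-01", "A1", 10.0),
    ("2024-01-03", "A1", 15.0),
    ("2024-01-09", "A1", 20.0),
])

# Transform: roll daily sales up to weekly totals before loading them into the warehouse.
weekly = conn.execute("""
    SELECT strftime('%Y-%W', sale_date) AS week, product_code, SUM(amount) AS total
    FROM sales
    GROUP BY week, product_code
""").fetchall()
print(weekly)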
A deductive database combines logic programming with a relational database.
A distributed database is one in which both the data and the DBMS span multiple computers.
A document-oriented database is designed for storing, retrieving, and managing document-oriented, or semi structured, information. Document-oriented databases are one of the main categories of NoSQL databases.
An embedded database system is a DBMS which is tightly integrated with an application software that requires access to stored data in such a way that the DBMS is hidden from the application's end-users and requires little or no ongoing maintenance.
End-user databases consist of data developed by individual end-users. Examples of these are collections of documents, spreadsheets, presentations, multimedia, and other files. Several products exist to support such databases. Some of them are much simpler than full-fledged DBMSs, with more elementary DBMS functionality.
A federated database system comprises several distinct databases, each with its own DBMS. It is handled as a single database by a federated database management system (FDBMS), which transparently integrates multiple autonomous DBMSs, possibly of different types (in which case it would also be a heterogeneous database system), and provides them with an integrated conceptual view.
Sometimes the term multi-database is used as a synonym to federated database, though it may refer to a less integrated (e.g., without an FDBMS and a managed integrated schema) group of databases that cooperate in a single application. In this case, typically middleware is used for distribution, which typically includes an atomic commit protocol (ACP), e.g., the two-phase commit protocol, to allow distributed (global) transactions across the participating databases.
A graph database is a kind of NoSQL database that uses graph structures with nodes, edges, and properties to represent and store information. General graph databases that can store any graph are distinct from specialized graph databases such as triplestores and network databases.
An array DBMS is a kind of NoSQL DBMS that allows modeling, storage, and retrieval of (usually large) multi-dimensional arrays such as satellite images and climate simulation output.
In a hypertext or hypermedia database, any word or a piece of text representing an object, e.g., another piece of text, an article, a picture, or a film, can be hyperlinked to that object. Hypertext databases are particularly useful for organizing large amounts of disparate information. For example, they are useful for organizing online encyclopedias, where users can conveniently jump around the text. The World Wide Web is thus a large distributed hypertext database.
A knowledge base (abbreviated KB, kb or Δ) is a special kind of database for knowledge management, providing the means for the computerized collection, organization, and retrieval of knowledge. It may also be a collection of data representing problems with their solutions and related experiences.
A mobile database can be carried on or synchronized from a mobile computing device.
Operational databases store detailed data about the operations of an organization. They typically process relatively high volumes of updates using transactions. Examples include customer databases that record contact, credit, and demographic information about a business's customers, personnel databases that hold information such as salary, benefits, skills data about employees, enterprise resource planning systems that record details about product components, parts inventory, and financial databases that keep track of the organization's money, accounting and financial dealings.
A parallel database seeks to improve performance through parallelization for tasks such as loading data, building indexes and evaluating queries.
The major parallel DBMS architectures which are induced by the underlying hardware architecture are:
Shared memory architecture, where multiple processors share the main memory space, as well as other data storage.
Shared disk architecture, where each processing unit (typically consisting of multiple processors) has its own main memory, but all units share the other storage.
Shared-nothing architecture, where each processing unit has its own main memory and other storage.
Probabilistic databases employ fuzzy logic to draw inferences from imprecise data.
Real-time databases process transactions fast enough for the result to come back and be acted on right away.
A spatial database can store the data with multidimensional features. The queries on such data include location-based queries, like "Where is the closest hotel in my area?".
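A naive sketch of such a location-based query in Python/sqlite3 (hotel names and coordinates are invented); an actual spatial database would use spatial data types and index structures such as R-trees instead of a full scan ordered by an arithmetic distance expression.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE hotel (name TEXT, lat REAL, lon REAL)")
conn.executemany("INSERT INTO hotel VALUES (?, ?, ?)", [
    ("Alpha", 52.52, 13.40),
    ("Beta",  52.50, 13.45),
])

my_lat, my_lon = 52.51, 13.41
# Naive nearest-neighbour search: order by squared planar distance and take the first row.
nearest = conn.execute("""
    SELECT name FROM hotel
    ORDER BY (lat - ?) * (lat - ?) + (lon - ?) * (lon - ?)
    LIMIT 1
""", (my_lat, my_lat, my_lon, my_lon)).fetchone()
print(nearest)  # ('Alpha',) for these sample coordinates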
A temporal database has built-in time aspects, for example a temporal data model and a temporal version of SQL. More specifically the temporal aspects usually include valid-time and transaction-time.
A terminology-oriented database builds upon an object-oriented database, often customized for a specific field.
An unstructured data database is intended to store in a manageable and protected way diverse objects that do not fit naturally and conveniently in common databases. It may include email messages, documents, journals, multimedia objects, etc. The name may be misleading since some objects can be highly structured. However, the entire possible object collection does not fit into a predefined structured framework. Most established DBMSs now support unstructured data in various ways, and new dedicated DBMSs are emerging.
Database management system
Connolly and Begg define database management system (DBMS) as a "software system that enables users to define, create, maintain and control access to the database". Examples of DBMSs include MySQL, PostgreSQL, Microsoft SQL Server, Oracle Database, and Microsoft Access.
The DBMS acronym is sometimes extended to indicate the underlying database model, with RDBMS for the relational, OODBMS for the object (oriented) and ORDBMS for the object–relational model. Other extensions can indicate some other characteristic, such as DDBMS for distributed database management systems.
The functionality provided by a DBMS can vary enormously. The core functionality is the storage, retrieval and update of data. Codd proposed the following functions and services a fully-fledged general purpose DBMS should provide:
Data storage, retrieval and update
User accessible catalog or data dictionary describing the metadata
Support for transactions and concurrency
Facilities for recovering the database should it become damaged
Support for authorization of access and update of data
Access support from remote locations
Enforcing constraints to ensure data in the database abides by certain rules
It is also generally expected that the DBMS will provide a set of utilities for such purposes as may be necessary to administer the database effectively, including import, export, monitoring, defragmentation and analysis utilities. The core part of the DBMS, interacting between the database and the application interface, is sometimes referred to as the database engine.
Often DBMSs will have configuration parameters that can be statically and dynamically tuned, for example the maximum amount of main memory on a server the database can use. The trend is to minimize the amount of manual configuration, and for cases such as embedded databases the need to target zero-administration is paramount.
The large major enterprise DBMSs have tended to increase in size and functionality and can have involved thousands of human years of development effort through their lifetime.
Early multi-user DBMS typically only allowed for the application to reside on the same computer with access via terminals or terminal emulation software. The client–server architecture was a development where the application resided on a client desktop and the database on a server allowing the processing to be distributed. This evolved into a multitier architecture incorporating application servers and web servers with the end user interface via a web browser with the database only directly connected to the adjacent tier.
A general-purpose DBMS will provide public application programming interfaces (APIs) and optionally a processor for database languages such as SQL to allow applications to be written to interact with the database. A special purpose DBMS may use a private API and be specifically customized and linked to a single application. For example, an email system may perform many of the functions of a general-purpose DBMS, such as message insertion, message deletion, attachment handling, blocklist lookup, and associating messages with an email address; however, these functions are limited to what is required to handle email.
Application
External interaction with the database will be via an application program that interfaces with the DBMS. This can range from a database tool that allows users to execute SQL queries textually or graphically, to a web site that happens to use a database to store and search information.
Application program interface
A programmer will code interactions to the database (sometimes referred to as a datasource) via an application program interface (API) or via a database language. The particular API or language chosen will need to be supported by the DBMS, possibly indirectly via a preprocessor or a bridging API. Some APIs aim to be database-independent, ODBC being a commonly known example. Other common APIs include JDBC and ADO.NET.
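Python's DB-API (PEP 249) plays a comparable role to ODBC or JDBC in that code written against it can target different DBMSs with only modest changes; the sketch below uses the standard-library sqlite3 driver and an invented account table (note that the parameter placeholder style, "?" here, varies between drivers).

import sqlite3  # any DB-API 2.0 driver exposes connect/cursor/execute in the same shape

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE account (id INTEGER PRIMARY KEY, owner TEXT, balance REAL)")

# Parameter binding keeps query text and data separate (and guards against SQL injection).
cur.execute("INSERT INTO account (owner, balance) VALUES (?, ?)", ("alice", 100.0))
cur.execute("SELECT owner, balance FROM account WHERE owner = ?", ("alice",))
print(cur.fetchone())  # ('alice', 100.0)

conn.commit()
conn.close()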
Database languages
Database languages are special-purpose languages, which allow one or more of the following tasks, sometimes distinguished as sublanguages:
Data control language (DCL) – controls access to data;
Data definition language (DDL) – defines data types such as creating, altering, or dropping tables and the relationships among them;
Data manipulation language (DML) – performs tasks such as inserting, updating, or deleting data occurrences;
Data query language (DQL) – allows searching for information and computing derived information.
Database languages are specific to a particular data model. Notable examples include:
SQL combines the roles of data definition, data manipulation, and query in a single language. It was one of the first commercial languages for the relational model, although it departs in some respects from the relational model as described by Codd (for example, the rows and columns of a table can be ordered). SQL became a standard of the American National Standards Institute (ANSI) in 1986, and of the International Organization for Standardization (ISO) in 1987. The standards have been regularly enhanced since and are supported (with varying degrees of conformance) by all mainstream commercial relational DBMSs.
OQL is an object model language standard (from the Object Data Management Group). It has influenced the design of some of the newer query languages like JDOQL and EJB QL.
XQuery is a standard XML query language implemented by XML database systems such as MarkLogic and eXist, by relational databases with XML capability such as Oracle and DB2, and also by in-memory XML processors such as Saxon.
SQL/XML combines XQuery with SQL.
A database language may also incorporate features like:
DBMS-specific configuration and storage engine management
Computations to modify query results, like counting, summing, averaging, sorting, grouping, and cross-referencing
Constraint enforcement (e.g. in an automotive database, only allowing one engine type per car)
Application programming interface version of the query language, for programmer convenience
Storage
Database storage is the container of the physical materialization of a database. It comprises the internal (physical) level in the database architecture. It also contains all the information needed (e.g., metadata, "data about the data", and internal data structures) to reconstruct the conceptual level and external level from the internal level when needed. Databases as digital objects contain three layers of information which must be stored: the data, the structure, and the semantics. Proper storage of all three layers is needed for future preservation and longevity of the database. Putting data into permanent storage is generally the responsibility of the database engine, a.k.a. the "storage engine". Though typically accessed by a DBMS through the underlying operating system (and often using the operating system's file systems as intermediates for storage layout), storage properties and configuration settings are extremely important for the efficient operation of the DBMS, and thus are closely maintained by database administrators. A DBMS, while in operation, always has its database residing in several types of storage (e.g., memory and external storage). The database data and the additional needed information, possibly in very large amounts, are coded into bits. Data typically reside in the storage in structures that look completely different from the way the data look at the conceptual and external levels, but in ways that attempt to optimize the reconstruction of these levels when needed by users and programs, as well as the computation of additional types of needed information from the data (e.g., when querying the database).
Some DBMSs support specifying which character encoding was used to store data, so multiple encodings can be used in the same database.
Various low-level database storage structures are used by the storage engine to serialize the data model so it can be written to the medium of choice. Techniques such as indexing may be used to improve performance. Conventional storage is row-oriented, but there are also column-oriented and correlation databases.
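A small illustration of indexing as a performance technique, in Python/sqlite3 (the event table is invented): the query plan reported before and after creating the index shows the engine switching from a full table scan to an index search.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE event (id INTEGER PRIMARY KEY, kind TEXT, payload TEXT)")
conn.executemany("INSERT INTO event (kind, payload) VALUES (?, ?)",
                 [("click", str(i)) for i in range(10000)])

query = "SELECT count(*) FROM event WHERE kind = 'click'"

# Without an index the engine has no choice but to scan the whole table.
print(conn.execute("EXPLAIN QUERY PLAN " + query).fetchall())

# Indexing 'kind' gives the storage engine an alternate access path for this predicate.
conn.execute("CREATE INDEX idx_event_kind ON event(kind)")
print(conn.execute("EXPLAIN QUERY PLAN " + query).fetchall())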
Materialized views
Often storage redundancy is employed to increase performance. A common example is storing materialized views, which consist of frequently needed external views or query results. Storing such views saves the expensive computing of them each time they are needed. The downsides of materialized views are the overhead incurred when updating them to keep them synchronized with their original updated database data, and the cost of storage redundancy.
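SQLite, used in the earlier sketches, has no native materialized views, but the idea can be imitated by storing a query result in an ordinary table and refreshing it explicitly (table names invented); DBMSs with built-in support automate this refresh bookkeeping.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (customer TEXT, total REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [("acme", 10.0), ("acme", 5.0), ("globex", 7.5)])

# "Materialize" an aggregate once instead of recomputing it for every query.
conn.execute("CREATE TABLE order_totals AS SELECT customer, SUM(total) AS total FROM orders GROUP BY customer")
print(conn.execute("SELECT * FROM order_totals").fetchall())

# The cost of the redundancy: after base-table updates the stored result must be refreshed.
conn.execute("INSERT INTO orders VALUES ('acme', 2.5)")
conn.executescript("""
    DELETE FROM order_totals;
    INSERT INTO order_totals SELECT customer, SUM(total) FROM orders GROUP BY customer;
""")
print(conn.execute("SELECT * FROM order_totals").fetchall())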
Replication
Occasionally a database employs storage redundancy by database objects replication (with one or more copies) to increase data availability (both to improve performance of simultaneous multiple end-user accesses to a same database object, and to provide resiliency in a case of partial failure of a distributed database). Updates of a replicated object need to be synchronized across the object copies. In many cases, the entire database is replicated.
Security
Database security deals with all various aspects of protecting the database content, its owners, and its users. It ranges from protection from intentional unauthorized database uses to unintentional database accesses by unauthorized entities (e.g., a person or a computer program).
Database access control deals with controlling who (a person or a certain computer program) is allowed to access what information in the database. The information may comprise specific database objects (e.g., record types, specific records, data structures), certain computations over certain objects (e.g., query types, or specific queries), or using specific access paths to the former (e.g., using specific indexes or other data structures to access information). Database access controls are set by personnel specially authorized by the database owner, using dedicated protected security DBMS interfaces.
This may be managed directly on an individual basis, or by the assignment of individuals and privileges to groups, or (in the most elaborate models) through the assignment of individuals and groups to roles which are then granted entitlements. Data security prevents unauthorized users from viewing or updating the database. Using passwords, users are allowed access to the entire database or subsets of it called "subschemas". For example, an employee database can contain all the data about an individual employee, but one group of users may be authorized to view only payroll data, while others are allowed access to only work history and medical data. If the DBMS provides a way to interactively enter and update the database, as well as interrogate it, this capability allows for managing personal databases.
Data security in general deals with protecting specific chunks of data, both physically (i.e., from corruption, destruction, or removal; see physical security) and against the unauthorized interpretation of them, or of parts of them, into meaningful information (e.g., by examining the strings of bits that they comprise and deducing valid credit-card numbers; see data encryption).
Change and access logging records who accessed which attributes, what was changed, and when it was changed. Logging services allow for a forensic database audit later by keeping a record of access occurrences and changes. Sometimes application-level code is used to record changes rather than leaving this to the database. Monitoring can be set up to attempt to detect security breaches.
Transactions and concurrency
Database transactions can be used to introduce some level of fault tolerance and data integrity after recovery from a crash. A database transaction is a unit of work, typically encapsulating a number of operations over a database (e.g., reading a database object, writing, acquiring or releasing a lock, etc.), an abstraction supported in database and also other systems. Each transaction has well defined boundaries in terms of which program/code executions are included in that transaction (determined by the transaction's programmer via special transaction commands).
The acronym ACID describes some ideal properties of a database transaction: atomicity, consistency, isolation, and durability.
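A minimal sketch of atomicity in Python/sqlite3 (accounts, balances, and the constraint are invented): either both legs of the transfer are applied, or the failure of one causes the whole transaction to be rolled back.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE account (name TEXT PRIMARY KEY, balance REAL CHECK (balance >= 0))")
conn.executemany("INSERT INTO account VALUES (?, ?)", [("alice", 100.0), ("bob", 0.0)])
conn.commit()

try:
    # As a context manager, the connection commits on success and rolls back
    # if an exception escapes the block.
    with conn:
        conn.execute("UPDATE account SET balance = balance + 500 WHERE name = 'bob'")
        # This leg violates the CHECK constraint, so the whole transfer is undone.
        conn.execute("UPDATE account SET balance = balance - 500 WHERE name = 'alice'")
except sqlite3.IntegrityError:
    pass

print(conn.execute("SELECT * FROM account ORDER BY name").fetchall())
# [('alice', 100.0), ('bob', 0.0)] — neither leg of the failed transfer persisted.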
Migration
A database built with one DBMS is not portable to another DBMS (i.e., the other DBMS cannot run it). However, in some situations, it is desirable to migrate a database from one DBMS to another. The reasons are primarily economical (different DBMSs may have different total costs of ownership or TCOs), functional, and operational (different DBMSs may have different capabilities). The migration involves the database's transformation from one DBMS type to another. The transformation should maintain (if possible) the database-related application (i.e., all related application programs) intact. Thus, the database's conceptual and external architectural levels should be maintained in the transformation. It may be desired that some aspects of the internal architecture level are maintained as well. A complex or large database migration may be a complicated and costly (one-time) project by itself, which should be factored into the decision to migrate. This is in spite of the fact that tools may exist to help migration between specific DBMSs. Typically, a DBMS vendor provides tools to help importing databases from other popular DBMSs.
Building, maintaining, and tuning
After designing a database for an application, the next stage is building the database. Typically, an appropriate general-purpose DBMS can be selected to be used for this purpose. A DBMS provides the needed user interfaces to be used by database administrators to define the needed application's data structures within the DBMS's respective data model. Other user interfaces are used to select needed DBMS parameters (like security related, storage allocation parameters, etc.).
When the database is ready (all its data structures and other needed components are defined), it is typically populated with the application's initial data (database initialization, which is typically a distinct project; in many cases using specialized DBMS interfaces that support bulk insertion) before making it operational. In some cases, the database becomes operational while empty of application data, and data are accumulated during its operation.
After the database is created, initialized and populated it needs to be maintained. Various database parameters may need changing and the database may need to be tuned for better performance; the application's data structures may be changed or added to, new related application programs may be written to add to the application's functionality, etc.
Backup and restore
Sometimes it is desired to bring a database back to a previous state (for many reasons, e.g., cases when the database is found corrupted due to a software error, or if it has been updated with erroneous data). To achieve this, a backup operation is done occasionally or continuously, where each desired database state (i.e., the values of its data and their embedding in database's data structures) is kept within dedicated backup files (many techniques exist to do this effectively). When it is decided by a database administrator to bring the database back to this state (e.g., by specifying this state by a desired point in time when the database was in this state), these files are used to restore that state.
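As one concrete mechanism, Python's sqlite3 connections expose an online backup method that copies a live database into a separate file (file names here are invented); other DBMSs offer their own dump and backup utilities serving the same purpose.

import sqlite3

source = sqlite3.connect("app.db")          # the live database
target = sqlite3.connect("app-backup.db")   # the backup destination

with target:
    source.backup(target)  # copies the full database, even while it is in use

target.close()
source.close()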
Static analysis
Static analysis techniques for software verification can be applied also in the scenario of query languages. In particular, the abstract interpretation framework has been extended to the field of query languages for relational databases as a way to support sound approximation techniques. The semantics of query languages can be tuned according to suitable abstractions of the concrete domain of data. The abstraction of relational database systems has many interesting applications, in particular, for security purposes, such as fine-grained access control, watermarking, etc.
Miscellaneous features
Other DBMS features might include:
Database logs – This helps in keeping a history of the executed functions.
Graphics component for producing graphs and charts, especially in a data warehouse system.
Query optimizer – Performs query optimization on every query to choose an efficient query plan (a partial order (tree) of operations) to be executed to compute the query result. May be specific to a particular storage engine.
Tools or hooks for database design, application programming, application program maintenance, database performance analysis and monitoring, database configuration monitoring, DBMS hardware configuration (a DBMS and related database may span computers, networks, and storage units) and related database mapping (especially for a distributed DBMS), storage allocation and database layout monitoring, storage migration, etc.
Increasingly, there are calls for a single system that incorporates all of these core functionalities into the same build, test, and deployment framework for database management and source control. Borrowing from other developments in the software industry, some market such offerings as "DevOps for database".
Design and modeling
The first task of a database designer is to produce a conceptual data model that reflects the structure of the information to be held in the database. A common approach to this is to develop an entity–relationship model, often with the aid of drawing tools. Another popular approach is the Unified Modeling Language. A successful data model will accurately reflect the possible state of the external world being modeled: for example, if people can have more than one phone number, it will allow this information to be captured. Designing a good conceptual data model requires a good understanding of the application domain; it typically involves asking deep questions about the things of interest to an organization, like "can a customer also be a supplier?", or "if a product is sold with two different forms of packaging, are those the same product or different products?", or "if a plane flies from New York to Dubai via Frankfurt, is that one flight or two (or maybe even three)?". The answers to these questions establish definitions of the terminology used for entities (customers, products, flights, flight segments) and their relationships and attributes.
Producing the conceptual data model sometimes involves input from business processes, or the analysis of workflow in the organization. This can help to establish what information is needed in the database, and what can be left out. For example, it can help when deciding whether the database needs to hold historic data as well as current data.
Having produced a conceptual data model that users are happy with, the next stage is to translate this into a schema that implements the relevant data structures within the database. This process is often called logical database design, and the output is a logical data model expressed in the form of a schema. Whereas the conceptual data model is (in theory at least) independent of the choice of database technology, the logical data model will be expressed in terms of a particular database model supported by the chosen DBMS. (The terms data model and database model are often used interchangeably, but in this article we use data model for the design of a specific database, and database model for the modeling notation used to express that design).
The most popular database model for general-purpose databases is the relational model, or more precisely, the relational model as represented by the SQL language. The process of creating a logical database design using this model uses a methodical approach known as normalization. The goal of normalization is to ensure that each elementary "fact" is only recorded in one place, so that insertions, updates, and deletions automatically maintain consistency.
The final stage of database design is to make the decisions that affect performance, scalability, recovery, security, and the like, which depend on the particular DBMS. This is often called physical database design, and the output is the physical data model. A key goal during this stage is data independence, meaning that the decisions made for performance optimization purposes should be invisible to end-users and applications. There are two types of data independence: Physical data independence and logical data independence. Physical design is driven mainly by performance requirements, and requires a good knowledge of the expected workload and access patterns, and a deep understanding of the features offered by the chosen DBMS.
Another aspect of physical database design is security. It involves both defining access control to database objects as well as defining security levels and methods for the data itself.
Models
A database model is a type of data model that determines the logical structure of a database and fundamentally determines in which manner data can be stored, organized, and manipulated. The most popular example of a database model is the relational model (or the SQL approximation of relational), which uses a table-based format.
Common logical data models for databases include:
Navigational databases
Hierarchical database model
Network model
Graph database
Relational model
Entity–relationship model
Enhanced entity–relationship model
Object model
Document model
Entity–attribute–value model
Star schema
An object–relational database combines the two related structures.
Physical data models include:
Inverted index
Flat file
Other models include:
Multidimensional model
Array model
Multivalue model
Specialized models are optimized for particular types of data:
XML database
Semantic model
Content store
Event store
Time series model
External, conceptual, and internal views
A database management system provides three views of the database data:
The external level defines how each group of end-users sees the organization of data in the database. A single database can have any number of views at the external level.
The conceptual level unifies the various external views into a compatible global view. It provides the synthesis of all the external views. It is out of the scope of the various database end-users, and is rather of interest to database application developers and database administrators.
The internal level (or physical level) is the internal organization of data inside a DBMS. It is concerned with cost, performance, scalability and other operational matters. It deals with storage layout of the data, using storage structures such as indexes to enhance performance. Occasionally it stores data of individual views (materialized views), computed from generic data, if performance justification exists for such redundancy. It balances all the external views' performance requirements, possibly conflicting, in an attempt to optimize overall performance across all activities.
While there is typically only one conceptual (or logical) and physical (or internal) view of the data, there can be any number of different external views. This allows users to see database information in a more business-related way rather than from a technical, processing viewpoint. For example, a financial department of a company needs the payment details of all employees as part of the company's expenses, but does not need details about employees that are the interest of the human resources department. Thus different departments need different views of the company's database.
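In SQL terms, such external views can be defined with CREATE VIEW; the sketch below (Python/sqlite3, with an invented employee table) exposes payment details to the finance department while hiding the medical data that only human resources should see.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE employee (id INTEGER PRIMARY KEY, name TEXT,
                           salary REAL, medical_notes TEXT);
    INSERT INTO employee VALUES (1, 'Ada', 5000.0, 'confidential');

    -- External view for the finance department: payment details only.
    CREATE VIEW payroll_view AS
        SELECT id, name, salary FROM employee;
""")
print(conn.execute("SELECT * FROM payroll_view").fetchall())
# [(1, 'Ada', 5000.0)] — the medical_notes column is not exposed through this view.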
The three-level database architecture relates to the concept of data independence which was one of the major initial driving forces of the relational model. The idea is that changes made at a certain level do not affect the view at a higher level. For example, changes in the internal level do not affect application programs written using conceptual level interfaces, which reduces the impact of making physical changes to improve performance.
The conceptual view provides a level of indirection between internal and external. On one hand it provides a common view of the database, independent of different external view structures, and on the other hand it abstracts away details of how the data are stored or managed (internal level). In principle every level, and even every external view, can be presented by a different data model. In practice usually a given DBMS uses the same data model for both the external and the conceptual levels (e.g., relational model). The internal level, which is hidden inside the DBMS and depends on its implementation, requires a different level of detail and uses its own types of data structures.
Separating the external, conceptual and internal levels was a major feature of the relational database model implementations that dominate 21st century databases.
Research
Database technology has been an active research topic since the 1960s, both in academia and in the research and development groups of companies (for example IBM Research). Research activity includes theory and development of prototypes. Notable research topics have included models, the atomic transaction concept, and related concurrency control techniques, query languages and query optimization methods, RAID, and more.
The database research area has several dedicated academic journals (for example, ACM Transactions on Database Systems-TODS, Data and Knowledge Engineering-DKE) and annual conferences (e.g., ACM SIGMOD, ACM PODS, VLDB, IEEE ICDE).
See also
Comparison of database tools
Comparison of object database management systems
Comparison of object–relational database management systems
Comparison of relational database management systems
Data hierarchy
Data bank
Data store
Database theory
Database testing
Database-centric architecture
Flat-file database
INP (database)
Journal of Database Management
Question-focused dataset
Notes
References
Sources
Further reading
Ling Liu and Tamer M. Özsu (Eds.) (2009). Encyclopedia of Database Systems, 4100 p., 60 illus.
Gray, J. and Reuter, A. Transaction Processing: Concepts and Techniques, 1st edition, Morgan Kaufmann Publishers, 1992.
Kroenke, David M. and David J. Auer. Database Concepts. 3rd ed. New York: Prentice, 2007.
Raghu Ramakrishnan and Johannes Gehrke, Database Management Systems
Abraham Silberschatz, Henry F. Korth, S. Sudarshan, Database System Concepts
Teorey, T.; Lightstone, S. and Nadeau, T. Database Modeling & Design: Logical Design, 4th edition, Morgan Kaufmann Press, 2005.
External links
DB File extension – information about files with the DB extension |
8484 | https://en.wikipedia.org/wiki/Deus%20Ex%20%28video%20game%29 | Deus Ex (video game) | Deus Ex is a 2000 action role-playing game developed by Ion Storm and published by Eidos Interactive. Set in a cyberpunk-themed dystopian world in the year 2052, the game follows JC Denton, an agent of the fictional agency United Nations Anti-Terrorist Coalition (UNATCO), who is given superhuman abilities by nanotechnology, as he sets out to combat hostile forces in a world ravaged by inequality and a deadly plague. His missions entangle him in a conspiracy that brings him into conflict with the Triads, Majestic 12, and the Illuminati.
Deus Ex's gameplay combines elements of the first-person shooter, stealth, adventure, and role-playing genres, allowing its tasks and missions to be completed in a variety of ways, which in turn lead to differing outcomes. Presented from the first-person perspective, the player can customize Denton's various abilities such as weapon skills or lockpicking, increasing his effectiveness in these areas; this opens up different avenues of exploration and methods of interacting with or manipulating other characters. The player can complete side missions away from the primary storyline by moving freely around the available areas, which can reward the player with experience points to upgrade abilities and alternative ways to tackle main missions.
Powered by the Unreal Engine, the game was released for Microsoft Windows in June 2000, with a Mac OS port following the next month. A modified version of the game was released for the PlayStation 2 in 2002 as Deus Ex: The Conspiracy. In the years following its release, Deus Ex has received additional improvements and content from its fan community.
The game received critical acclaim, including being named "Best PC Game of All Time" in PC Gamer's "Top 100 PC Games" in 2011 and in a poll carried out by the UK gaming magazine PC Zone. It received several Game of the Year awards, drawing praise for its pioneering designs in player choice and multiple narrative paths. Deus Ex has been regarded as one of the best video games of all time. It had sold more than 1 million copies as of April 23, 2009. The game led to a series, which includes the sequel Deus Ex: Invisible War (2003), and three prequels: Deus Ex: Human Revolution (2011), Deus Ex: The Fall (2013), and Deus Ex: Mankind Divided (2016).
Gameplay
Deus Ex incorporates elements from four video game genres: role-playing, first-person shooter, adventure, and "immersive simulation", the last of which describes a game where "nothing reminds you that you're just playing a game." For example, the game uses a first-person camera during gameplay and includes exploration and character interaction as primary features.
The player assumes the role of JC Denton, a nanotech-augmented operative of the United Nations Anti-Terrorist Coalition (UNATCO). This nanotechnology is a central gameplay mechanism and allows players to perform superhuman feats.
Role-playing elements
As the player accomplishes objectives, the player character is rewarded with "skill points". Skill points are used to enhance a character's abilities in eleven different areas, and were designed to provide players with a way to customize their characters; a player might create a combat-focused character by increasing proficiency with pistols or rifles, while a more furtive character can be created by focusing on lock picking and computer hacking abilities. There are four different levels of proficiency in each skill, with the skill point cost increasing for each successive level.
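A rough sketch of such a tiered skill system is given below; the level names mirror the game's, but the point costs and the upgrade logic are invented for illustration and are not the game's actual values or code.

```python
# Toy illustration of a four-level skill with increasing upgrade costs.
LEVELS = ["Untrained", "Trained", "Advanced", "Master"]
UPGRADE_COST = {"Trained": 900, "Advanced": 1800, "Master": 2700}  # hypothetical values

class Skill:
    def __init__(self, name):
        self.name = name
        self.level_index = 0  # every skill starts Untrained

    def upgrade(self, available_points):
        """Spend points on the next level if possible; return the points left over."""
        if self.level_index == len(LEVELS) - 1:
            return available_points  # already Master
        cost = UPGRADE_COST[LEVELS[self.level_index + 1]]
        if available_points >= cost:
            self.level_index += 1
            return available_points - cost
        return available_points

pistols = Skill("Pistols")
points = 3000
points = pistols.upgrade(points)   # -> Trained, 2100 points left
points = pistols.upgrade(points)   # -> Advanced, 300 points left
print(pistols.name, LEVELS[pistols.level_index], points)
```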
Weapons may be customized through "weapon modifications", which can be found or purchased throughout the game. The player might add scopes, silencers, or laser sights; increase the weapon's range, accuracy, or magazine size; or decrease its recoil and reload time; as appropriate to the weapon type.
Players are further encouraged to customize their characters through nano-augmentations—cybernetic devices that grant characters superhuman powers. While the game contains eighteen different nano-augmentations, the player can install a maximum of nine, as each must be used on a certain part of the body: one in the arms, legs, eyes, and head; two underneath the skin; and three in the torso. This forces the player to choose carefully between the benefits offered by each augmentation. For example, the arm augmentation requires the player to decide between boosting their character's skill in hand-to-hand combat or his ability to lift heavy objects.
Interaction with non-player characters (NPCs) was a significant design focus. When the player interacts with a non-player character, the game will enter a cutscene-like conversation mode where the player advances the conversation by selecting from a list of dialogue options. The player's choices often have a substantial effect on both gameplay and plot, as non-player characters will react in different ways depending on the selected answer (e.g., rudeness makes them less likely to assist the player).
Combat elements
Deus Ex features combat similar to first-person shooters, with real-time action, a first-person perspective, and reflex-based gameplay. As the player will often encounter enemies in groups, combat often tends toward a tactical approach, including the use of cover, strafing, and "hit-and-run". A USA Today reviewer found, "At the easiest difficulty setting, your character is puréed again and again by an onslaught of human and robotic terrorists until you learn the value of stealth." However, through the game's role-playing systems, it is possible to develop a character's skills and augmentations to create a tank-like combat specialist with the ability to deal and absorb large amounts of damage. Non-player characters will praise or criticize the main character depending on the use of force, incorporating a moral element into the gameplay.
Deus Ex features a head-up display crosshair, whose size dynamically shows where shots will fall based on movement, aim, and the weapon in use; the reticle expands while the player is moving or shifting their aim, and slowly shrinks to its original size while no actions are taken. How quickly the reticle shrinks depends on the character's proficiency with the equipped weapon, the number of accuracy modifications added to the weapon, and the level of the "Targeting" nano-augmentation.
Deus Ex features twenty-four weapons, ranging from crowbars, electroshock weapons, and riot baton, to laser-guided anti-tank rockets and assault rifles; both lethal and non-lethal weapons are available. The player can also make use of several weapons of opportunity, such as fire extinguishers.
Player choice
Gameplay in Deus Ex emphasizes player choice. Objectives can be completed in numerous ways, including stealth, sniping, heavy frontal assault, dialogue, or engineering and computer hacking. This level of freedom requires that levels, characters, and puzzles be designed with significant redundancy, as a single play-through of the game will miss large sections of dialogue, areas, and other content. In some missions, the player is encouraged to avoid using deadly force, and specific aspects of the story may change depending on how violent or non-violent the player chooses to be. The game is also unusual in that two of its boss villains can be killed off early in the game, or left alive to be defeated later, and this too affects how other characters interact with the player.
Because of its design focus on player choice, Deus Ex has been compared with System Shock, a game that inspired its design. Together, these factors give the game a high degree of replayability, as the player will have vastly different experiences, depending on which methods they use to accomplish objectives.
Multiplayer
Deus Ex was designed as a single-player game, and the initial releases of the Windows and Macintosh versions of the game did not include multiplayer functionality. Support for multiplayer modes was later incorporated through patches. The component consists of three game modes: deathmatch, basic team deathmatch, and advanced team deathmatch. Five maps, based on levels from the single-player portion of the game, were included with the original multiplayer patch, but many user-created maps exist, and many features of the single-player game that are missing from multiplayer have been re-introduced by user-made modifications. The PlayStation 2 release of Deus Ex does not offer a multiplayer mode. In April 2014, it was announced that GameSpy would cease its master server services, also affecting Deus Ex. A community-made patch for the multiplayer mode was created in response.
Synopsis
Setting and characters
Deus Ex takes place in 2052, in an alternate history where real-world conspiracy theories are true. These include speculations regarding black helicopters, vaccinations, and FEMA, as well as Area 51, the ECHELON network, Men in Black, chupacabras (in the form of "greasels"), and grey aliens. Mysterious groups such as Majestic 12, the Illuminati, the Knights Templar, the Bilderberg Group, and the Trilateral Commission also either play a central part in the plot or are alluded to during the course of the game.
The plot of Deus Ex depicts a society on a slow spiral into chaos. There is a massive division between the rich and the poor, not only socially, but in some cities physically. A lethal pandemic, known as the "Gray Death", ravages the world's population, especially within the United States, and has no cure. A synthetic vaccine, "Ambrosia", manufactured by the company VersaLife, nullifies the effects of the virus but is in critically short supply. Because of its scarcity, Ambrosia is available only to those deemed "vital to the social order", and finds its way primarily to government officials, military personnel, the rich and influential, scientists, and the intellectual elite. With no hope for the common people of the world, riots occur worldwide, and some terrorist organizations have formed with the professed intent of assisting the downtrodden, among them the National Secessionist Forces (NSF) of the U.S. and a French group known as Silhouette.
To combat these threats to the world order, the United Nations has expanded its influence around the globe to form the United Nations Anti-Terrorist Coalition (UNATCO). It is headquartered near New York City in a bunker beneath Liberty Island, placed there after a terrorist strike on the Statue of Liberty.
The main character of Deus Ex is UNATCO agent JC Denton (voiced by Jay Franke), one of the first in a new line of agents physically altered with advanced nanotechnology to gain superhuman abilities, alongside his brother Paul (also voiced by Jay Franke), who joined UNATCO to avenge his parents' deaths at the hands of Majestic 12. His UNATCO colleagues include the mechanically-augmented and ruthlessly efficient field agents Gunther Hermann and Anna Navarre; Quartermaster General Sam Carter, and the bureaucratic UNATCO chief Joseph Manderley. UNATCO communications tech Alex Jacobson's character model and name are based on Warren Spector's nephew, Alec Jacobson.
JC's missions bring him into contact with various characters, including NSF leader Juan Lebedev, hacker and scientist Tracer Tong, nano-tech expert Gary Savage, Nicolette DuClare (daughter of an Illuminati member), former Illuminati leader Morgan Everett, the Artificial Intelligences (AI) Daedalus and Icarus, and Bob Page, owner of VersaLife and leader of Majestic 12, a clandestine organization that has usurped the infrastructure of the Illuminati, allowing him to control the world for his own ends.
Plot
After completing his training, UNATCO agent JC Denton takes several missions given by Director Joseph Manderley to track down members of the National Secessionist Forces (NSF) and their stolen shipments of the Ambrosia vaccine, the treatment for the Gray Death virus. Through these missions, JC is reunited with his brother, Paul, who is also nano-augmented. JC tracks the Ambrosia shipment to a private terminal at LaGuardia Airport. Paul meets JC outside the plane and explains that he has defected from UNATCO and is working with the NSF after learning that the Gray Death is a human-made virus, with UNATCO using its power to make sure only the elite receive the vaccine.
JC returns to UNATCO headquarters and is told by Manderley that both he and Paul have been outfitted with a 24-hour kill switch and that Paul's has been activated due to his betrayal. Manderley orders JC to fly to Hong Kong to eliminate Tracer Tong, a hacker whom Paul has contact with, and who can disable the kill switches. Instead, JC returns to Paul's apartment to find Paul hiding inside. Paul further explains his defection and encourages JC to also defect by sending out a distress call to alert the NSF's allies. Upon doing so, JC becomes wanted by UNATCO, and his kill switch is activated by Federal Emergency Management Agency (FEMA) Director Walton Simons. JC is unable to escape UNATCO forces, and both he and Paul (provided he survived the raid on the apartment) are taken to a secret prison below UNATCO headquarters. An entity named "Daedalus" contacts JC and informs him that the prison is part of Majestic 12, and arranges for him and Paul to escape. The two flee to Hong Kong to meet with Tong, who deactivates their kill switches. Tong requests that JC infiltrate the VersaLife building. Doing so, JC discovers that the corporation is the source for the Gray Death, and is able to steal the plans for the virus and destroy the universal constructor (UC) that produces it.
Analysis of the virus shows that its structure was designed by the Illuminati, prompting Tong to send JC to Paris to obtain their help fighting Majestic 12. JC meets with Illuminati leader Morgan Everett and learns that the technology behind the Gray Death was intended to be used for augmentation, but Majestic 12, led by trillionaire businessman and former Illuminatus Bob Page, stole and repurposed it. Everett recognizes that without VersaLife's UC, Majestic 12 can no longer create the virus, and will likely target Vandenberg Air Force Base, where X-51, a group of former Area 51 scientists, have built another one. After aiding the base personnel in repelling a Majestic 12 attack, JC meets X-51 leader Gary Savage, who reveals that Daedalus is an artificial intelligence (AI) borne out of the ECHELON program.
Everett attempts to gain control over Majestic 12's communications network by releasing Daedalus onto the U.S. military networks, but Page counters by releasing his own AI, Icarus. Icarus merges with Daedalus to form a new AI, Helios, which can control all global communications. Savage enlists JC's help in procuring schematics for reconstructing components for the UC that were damaged during Majestic 12's raid of Vandenberg. JC finds the schematics and transmits them to Savage. Page intercepts the transmission and launches a nuclear missile at Vandenberg to ensure that Area 51, now Majestic 12's headquarters, will be the only location in the world with an operational UC. However, JC is able to reprogram the missile to strike Area 51 instead.
JC travels to Area 51 to confront Page. Page reveals that he seeks to merge with Helios and gain full control over nanotechnology. JC is contacted by Tong, Everett, and the Helios AI simultaneously. All three factions ask for his help in defeating Page while furthering their own objectives. Tong seeks to plunge the world into a Dark Age by destroying the global communications hub and preventing anyone from taking control of the world. Everett offers Denton the chance to return the Illuminati to power by killing Page and using the Area 51 technology to rule the world with an invisible hand. Helios wishes to merge with Denton and rule the world as a benevolent dictator with infinite knowledge and reason. The player's decision determines the future and brings the game to a close.
Development
After Looking Glass Technologies and Origin Systems released Ultima Underworld II: Labyrinth of Worlds in January 1993, producer Warren Spector began to plan Troubleshooter, the game that would become Deus Ex. In his 1994 proposal, he described the concept as "Underworld-style first-person action" in a real-world setting with "big-budget, nonstop action". After Spector and his team were laid off from Looking Glass, John Romero of Ion Storm offered him the chance to make his "dream game" without any restrictions.
Preproduction for Deus Ex began around August 1997 and lasted roughly six months. The project's budget was $5 million to $7 million. The game's working title was Shooter: Majestic Revelations, and it was scheduled for release at Christmas 1998. The team developed the setting before the game mechanics. Noticing his wife's fascination with The X-Files, Spector connected the "real world, millennial weirdness, [and] conspiracy" topics on his mind and decided to make a game about them that would appeal to a broad audience. The Shooter design document cast the player as an augmented agent working against an elite cabal in the "dangerous and chaotic" 2050s. It cited Half-Life, Fallout, Thief: The Dark Project, and GoldenEye 007 as game design influences, and used the stories and settings of Colossus: The Forbin Project, The Manchurian Candidate, RoboCop, The X-Files, and Men in Black as reference points. The team designed a skill system that featured "special powers" derived from nanotechnological augmentation and avoided the inclusion of die rolling and skills that required micromanagement. Spector also cited Konami's 1995 role-playing video game Suikoden as an inspiration, stating that the limited choices in Suikoden inspired him to expand on the idea with more meaningful choices in Deus Ex.
In early 1998, the Deus Ex team grew to 20 people, and the game entered a 28-month production phase. The development team consisted of three programmers, six designers, seven artists, a writer, an associate producer, a "tech", and Spector. Two writers and four testers were hired as contractors. Chris Norden was the lead programmer and assistant director, Harvey Smith the lead designer, Jay Lee the lead artist, and Sheldon Pacotti the lead writer. Close friends of the team who understood the intentions behind the game were invited to playtest and give feedback. The wide range of input led to debates in the office and changes to the game. Spector later concluded that the team was "blinded by promises of complete creative freedom", and by their belief that the game would have no budget, marketing, or time restraints. By mid-1998, the game's title had become Deus Ex, derived from the Latin literary device deus ex machina ("god from the machine"), in which a plot is resolved by an unpredictable intervention.
Spector felt that the best aspects of Deus Ex's development were the "high-level vision" and length of preproduction, flexibility within the project, testable "proto-missions", and the Unreal Engine license. The team's pitfalls included the management structure, unrealistic goals, underestimating risks with artificial intelligence, their handling of proto-missions, and weakened morale from bad press. Deus Ex was released on June 23, 2000, and published by Eidos Interactive for Microsoft Windows. The team planned third-party ports for Mac OS 9 and Linux.
Design
The original 1997 design document for Deus Ex privileges character development over all other features. The game was designed to be "genre-busting": in parts simulation, role-playing, first-person shooter, and adventure. The team wanted players to consider "who they wanted to be" in the game, and for that to alter how they behaved in the game. In this way, the game world was "deeply simulated", or realistic and believable enough that the player would solve problems in creative, emergent ways without noticing distinct puzzles. However, the simulation ultimately failed to maintain the desired level of openness, and they had to brute force "skill", "action", and "character interaction" paths through each level. Playtesting also revealed that their idea of a role-playing game based on the real world was more interesting in theory than in reality, as certain aspects of the real world, such as hotels and office buildings, were not compelling in a game.
The game's story changed considerably during production, but the idea of an augmented counterterrorist protagonist named JC Denton remained throughout. Though Spector originally pictured Deus Ex as akin to The X-Files, lead writer Sheldon Pacotti felt that it ended up more like James Bond. Spector wrote that the team overextended itself by planning highly elaborate scenes. Designer Harvey Smith removed a mostly complete White House level due to its complexity and production needs. Finished digital assets were repurposed or abandoned by the team. Pete Davison of USgamer referred to the White House and presidential bunker as "the truly deleted scenes of Deus Ex lost levels".
One of the things that Spector wanted to achieve in Deus Ex was to make JC Denton a cipher for the player, to create a better immersion and gameplay experience. He did not want the character to force any emotion, so that whatever feelings the player may be experiencing come from themselves rather than from JC Denton. To do this, Spector instructed voice actor Jay Anthony Franke to record his dialogue without any emotion but in a monotone voice, which is unusual for a voice acting role.
Once coded, the team's game systems did not work as intended. The early tests of the conversation system and user interface were flawed. The team also found augmentations and skills to be less interesting than they had seemed in the design document. In response, Harvey Smith substantially revised the augmentations and skills. Production milestones served as wake-up calls for the game's direction. A May 1998 milestone that called for a functional demo revealed that the size of the game's maps caused frame rate issues, which was one of the first signs that maps needed to be cut. A year later, the team reached a milestone for finished game systems, which led to better estimates for their future mission work and the reduction of the 500-page design document to 270 pages. Spector recalled Smith's mantra on this point: "less is more".
One of the team's biggest blind spots was the AI programming for NPCs. Spector wrote that they considered it in preproduction, but that they did not figure out how to handle it until "relatively late in development". This led to wasted time when the team had to discard their old AI code. The team built atop their game engine's shooter-based AI instead of writing new code that would allow characters to exhibit convincing emotions. As a result, NPC behavior was variable until the very end of development. Spector felt that the team's "sin" was their inconsistent display of a trustable "human AI".
Technology
The game was developed on systems including dual-processor Pentium Pro 200s and Athlon 800s with eight and nine gigabyte hard drives, some using SCSI. The team used "more than 100 video cards" throughout development. Deus Ex was built using Visual Studio, Lightwave, and Lotus Notes. They also made a custom dialogue editor, ConEdit. The team used UnrealEd atop the Unreal game engine for map design, which Spector wrote was "superior to anything else available". Their trust in UnrealScript led them to code "special-cases" for their immediate mission needs instead of more generalized multi-case code. Even as concerned team members expressed misgivings, the team only addressed this later in the project. To Spector, this was a lesson to always prefer "general solutions" over "special casing", such that the toolset works predictably.
They waited to license a game engine until after preproduction, expecting the benefits of licensing to be more time for the content and gameplay, which Spector reported to be the case. They chose the Unreal engine, as it did 80% of what they needed from an engine and was more economical than building from scratch. Their small programming team allowed for a larger design group. The programmers also found the engine accommodating, though it took about nine months to acclimate to the software. Spector felt that they would have understood the code better had they built it themselves, instead of "treating the engine as a black box" and coding conservatively. He acknowledged that this precipitated into the Direct3D issues in their final release, which slipped through their quality assurance testing. Spector also noted that the artificial intelligence, pathfinding, and sound propagation were designed for shooters and should have been rewritten from scratch instead of relying on the engine. He thought the licensed engine worked well enough that he expected to use the same for the game's sequel Deus Ex: Invisible War and Thief 3. He added that developers should not attempt to force their technology to perform in ways it was not intended, and should find a balance between perfection and pragmatism.
Music
The soundtrack of Deus Ex, composed by Alexander Brandon (primary contributor, including main theme), Dan Gardopée ("Naval Base" and "Vandenberg"), Michiel van den Bos ("UNATCO", "Lebedev's Airfield", "Airfield Action", "DuClare Chateau", plus minor contribution to some of Brandon's tracks), and Reeves Gabrels ("NYC Bar"), was praised by critics for complementing the gritty atmosphere predominant throughout the game with melodious and ambient music incorporated from a number of genres, including techno, jazz, and classical. The music sports a basic dynamic element, similar to the iMUSE system used in early 1990s LucasArts games; during play, the music will change to a different iteration of the currently playing song based on the player's actions, such as when the player starts a conversation, engages in combat, or transitions to the next level. All the music in the game is in tracked (module) format; Gabrels' contribution, "NYC Bar", was converted to a module by Alexander Brandon.
Release
Deus Ex has been re-released in several iterations since its original publication and has also been the basis of several mods developed by its fan community.
The Deus Ex: Game of the Year Edition, which was released on May 8, 2001, contains the latest game updates and a software development kit, a separate soundtrack CD, and a page from a fictional newspaper featured prominently in Deus Ex titled The Midnight Sun, which recounts recent events in the game's world. However, later releases of said version do not include the soundtrack CD and contain a PDF version of the newspaper on the game's disc.
The Mac OS version of the game, released a month after the Windows version, was shipped with the same capabilities and can also be patched to enable multiplayer support. However, publisher Aspyr Media did not release any subsequent editions of the game or any additional patches. As such, the game is only supported in Mac OS 9 and the "Classic" environment in Mac OS X, neither of which are compatible with Intel-based Macs. The Windows version will run on Intel-based Macs using Crossover, Boot Camp, or other software to enable a compatible version of Windows to run on a Mac.
A PlayStation 2 port of the game, retitled Deus Ex: The Conspiracy outside of Europe, was released on March 26, 2002. Along with motion-captured character animations and pre-rendered introductory and ending cinematics that replaced the original versions, it features a simplified interface with optional auto-aim. There are many minor changes in level design, some to balance gameplay, but most to accommodate loading transition areas, due to the memory limitations of the PlayStation 2. The PlayStation 2 version was re-released in Europe on the PlayStation 3 as a PlayStation 2 Classic on May 16, 2012.
Loki Games worked on a Linux version of the game, but the company went out of business before releasing it. The OpenGL layer they wrote for the port, however, was sent out to Windows gamers through an online patch.
Though their quality assurance did not see major Direct3D issues, players noted "dramatic slowdowns" immediately following the launch, and the team did not understand the "black box" of the Unreal engine well enough to make it do exactly what they needed. Spector divided Deus Ex reviews into two categories: those that begin with how "Warren Spector makes games all by himself", and those claiming that "Deus Ex couldn't possibly have been made by Ion Storm". He has said that the game won over 30 "best of" awards in 2001, and concluded that their final game was not perfect, but that they were much closer for having tried to "do things right or not at all".
Mods
Deus Ex was built on the Unreal Engine, which already had an active community of modders. In September 2000, Eidos Interactive and Ion Storm announced in a press release that they would be releasing the software development kit (SDK), which included all the tools used to create the original game. Several team members, as well as project director Warren Spector, stated that they were "really looking forward to seeing what [the community] does with our tools". The kit was released on September 22, 2000, and soon gathered community interest, followed by the release of tutorials and small mods, and eventually announcements of large mods and conversions. While Ion Storm did not hugely alter the engine's rendering and core functionality, they introduced role-playing elements.
In 2009, a fan-made mod called The Nameless Mod (TNM) was released by Off Topic Productions. The game's protagonist is a user of an Internet forum, with digital places represented as physical locations. The mod offers roughly the same amount of gameplay as Deus Ex and adds several new features to the game, with a more open world structure than Deus Ex and new weapons such as the player character's fists. The mod was developed over seven years and has thousands of lines of recorded dialogue and two different parallel story arcs. Upon its release, TNM earned a 9/10 overall from PC PowerPlay magazine. In Mod DB's 2009 Mod of the Year awards, The Nameless Mod won the Editor's Choice award for Best Singleplayer Mod.
In 2015, during the 15th anniversary of the game's release, Square Enix (who had acquired Eidos earlier) endorsed a free fan-created mod, Deus Ex: Revision, which was released through Steam. The mod, created by Caustic Creative, is a graphical overhaul of the original game, adding in support for newer versions of DirectX, upgraded textures adapted from previous mods, a remixed soundtrack, and more world-building aesthetics. It also alters aspects of gameplay, including new level design paths and in-game architecture. Another overhaul mod, GMDX, released its final version in mid-2017 with enhanced artificial intelligence, improved physics, and upgraded visual textures.
The Lay D Denton Project, a mod adding the ability to play as a female JC – a feature that had been planned for Deus Ex but ultimately not implemented – was released in 2021. This included the re-recording of all of JC's voice lines by voice actress Karen Rohan, the addition of 3D models for the character, and editing of all gendered references to JC including other characters' voice clips. The audio editing was the most difficult aspect, as any abnormalities would have been noticed easily; a few characters were too difficult to edit, and had to be recast for the mod.
Reception
Sales
According to Computer Gaming World's Stefan Janicki, Deus Ex had "sold well in North America" by early 2001. In the United States, it debuted at #6 on PC Data's sales chart for the week ending June 24, at an average retail price of $40. It fell to eighth place in its second week but rose again to sixth place in its third. It proceeded to place in the top 10 rankings for August 6–12 and the week ending September 2 and to secure 10th place overall for the months of July and August. Deus Ex achieved sales of 138,840 copies and revenues of $5 million in the United States by the end of 2000, according to PC Data. The firm tracked another 91,013 copies sold in the country during 2001.
The game was a larger hit in Europe; Janicki called it a "blockbuster" for the region, which broke a trend of weak sales for 3D games. He wrote, "[I]n Europe—particularly in England—the action/RPG dominated the charts all summer, despite competition from heavyweights like Diablo II and The Sims." In the German-speaking market, PC Player reported sales over 70,000 units for Deus Ex by early 2001. It debuted at #3 in the region for July 2000 and held the position in August, before dropping to #10, #12 and #27 over the following three months. In the United Kingdom, Deus Ex reached #1 on the sales charts during August and spent three months in the top 10. It received a "Silver" award from the Entertainment and Leisure Software Publishers Association (ELSPA) in February 2002, indicating lifetime sales of at least 100,000 units in the United Kingdom. The ELSPA later raised it to "Gold" status, for 200,000 sales.
In April 2009, Square Enix revealed that Deus Ex had surpassed 1 million sales globally, but was outsold by Deus Ex: Invisible War.
Critical response
Deus Ex received critical acclaim, attaining a score of 90 out of 100 from 28 critics on Metacritic. Thierry Nguyen from Computer Gaming World said that the game "delivers moments of brilliance, idiocy, ingenuity, and frustration". Computer Games Magazine praised the title for its deep gameplay and its use of multiple solutions to situations in the game. Similarly, Edge highlighted the game's freedom of choice, saying that Deus Ex "never tells you what to do. Goals are set, but alter according to your decisions." Eurogamer's Rob Fahey lauded the game, writing, "Moody and atmospheric, compelling and addictive, this is first person gaming in grown-up form, and it truly is magnificent." Jeff Lundrigan reviewed the PC version of the game for Next Generation, rating it five stars out of five, and stated that "This is hands-down one of the best PC games ever made. Stop reading and go get yours now."
Former GameSpot reviewer Greg Kasavin, though awarding the game a score of 8.2 out of 10, was disappointed by the security and lockpicking mechanics. "Such instances are essentially noninteractive", he wrote. "You simply stand there and spend a particular quantity of electronic picks or modules until the door opens or the security goes down." Kasavin made similar complaints about the hacking interface, noting that "Even with basic hacking skills, you'll still be able to bypass the encryption and password protection ... by pressing the 'hack' button and waiting a few seconds".
The game's graphics and voice acting were also met with muted enthusiasm. Kasavin complained of Deus Ex's relatively sub-par graphics, blaming them on the game's "incessantly dark industrial environments". GamePro reviewer Chris Patterson took the time to note that despite being "solid acoustically", Deus Ex had moments of weakness. He poked fun at JC's "Joe Friday, 'just the facts', deadpan", and the "truly cheesy accents" of minor characters in Hong Kong and New York City. IGN called the graphics "blocky", adding that "the animation is stiff, and the dithering is just plain awful in some spots", referring to the limited capabilities of the Unreal Engine used to design the game. The website later stated that "overall Deus Ex certainly looks better than your average game".
Reviewers and players also complained about the size of Deus Ex's save files. An Adrenaline Vault reviewer noted that "Playing through the entire adventure, [he] accumulated over 250 MB of save game data, with the average file coming in at over 15 MB."
The game developed a strong cult following, leading to a core modding and playing community that remained active over 15 years after its release. In an interview with IGN in June 2015, game director Warren Spector said he never expected Deus Ex to sell many copies, but he did expect it to become a cult classic among a smaller, active community, and he continues to receive fan mail from players to date regarding their experiences and thoughts about Deus Ex.
Awards and accolades
Deus Ex received over 30 "best of" awards in 2001, from outlets such as IGN, GameSpy, PC Gamer, Computer Gaming World, and The Adrenaline Vault. It won "Excellence in Game Design" and "Game Innovation Spotlight" at the 2001 Game Developers Choice Awards, and it was nominated for "Game of the Year". At the Interactive Achievement Awards, it won in the "Computer Innovation" and "Computer Action/Adventure" categories and received nominations for "Sound Design", "PC Role-Playing", and "Game of the Year" in both the PC and overall categories. The British Academy of Film and Television Arts named it "PC Game of the Year". The game also collected several "Best Story" accolades, including first prize in Gamasutra's 2006 "Quantum Leap" awards for storytelling in a video game.
Deus Ex has appeared in several lists of the greatest games. It was included in IGN's "100 Greatest Games of All Time" (#40, #21 and #34 in 2003, 2005 and 2007, respectively), "Top 25 Modern PC Games" (4th place in 2010) and "Top 25 PC Games of All Time" (#20 and #21 in 2007 and 2009 respectively) lists. GameSpy featured the game in its "Top 50 Games of All Time" (18th place in 2001) and "25 Most Memorable Games of the Past 5 Years" (15th place in 2004) lists, and in the site's "Hall of Fame". PC Gamer placed Deus Ex on its "Top 100 PC Games of All Time" (#2, #2, #1 by staff and #4 by readers in 2007, 2008, 2010 and 2010 respectively) and "50 Best Games of All Time" (#10 and #27 in 2001 and 2005) lists, and it was awarded 1st place in PC Zone's "101 Best PC Games Ever" feature. It was also included in Yahoo! UK Video Games' "100 Greatest Computer Games of All Time" (28th place) list, and in Edge's "The 100 Best Videogames" (29th place in 2007) and "100 Best Games to Play Today" (57th place in 2009) lists. Deus Ex was named the second-best game of the 2000s by Gamasutra. In 2012, Time named it one of the 100 greatest video games of all time, and G4tv named it the 53rd best game of all time for its "complex and well-crafted story that was really the start of players making choices that genuinely affect the outcome". 1UP.com listed it as one of the most important games of all time, calling its influence "too massive to properly gauge". In 2019, The Guardian named it the 29th best game of the 21st century, describing it as a "cultural event".
Legacy
Sequels
A sequel, Deus Ex: Invisible War, was released in the United States on December 2, 2003, and in Europe in early 2004 for Windows and Xbox. A second sequel, titled Deus Ex: Clan Wars, was initially conceived as a multiplayer-focused third game for the series. After the commercial performance and public reception of Deus Ex: Invisible War failed to meet expectations, the decision was made to set the game in a separate universe, and Deus Ex: Clan Wars was eventually published under the title Project: Snowblind.
On March 29, 2007, Valve announced Deus Ex and its sequel would be available for purchase from their Steam service. Among the games announced were several other Eidos franchise titles, including Thief: Deadly Shadows and Tomb Raider.
Eidos Montréal produced a prequel to Deus Ex called Deus Ex: Human Revolution. This was confirmed on November 26, 2007, when Eidos Montréal posted a teaser trailer for the title on their website. The game was released on August 23, 2011, for the PC, PlayStation 3, and Xbox 360 platforms and received critical acclaim.
On April 7, 2015, Eidos announced a sequel to Deus Ex: Human Revolution and second prequel to Deus Ex titled Deus Ex: Mankind Divided. It was released on August 23, 2016.
Adaptation
A film adaptation based on the game was initially announced in May 2002 by Columbia Pictures. The film was to be produced by Laura Ziskin, with Greg Pruss attached to write the screenplay. Peter Schlessel, president of production for Columbia Pictures, and Paul Baldwin, president of marketing for Eidos Interactive, stated that they were confident that the adaptation would be a successful development for both the studios and the franchise. In a March 2003 interview with IGN, Pruss said that the character of JC Denton would be "a little bit filthier than he was in the game", and that the script was shaping up to be darker in tone than the original game. Although a release date was scheduled for 2006, the film did not get past the scripting stage.
In 2012, CBS Films revived the project, buying the rights and commissioning a film inspired by the Deus Ex series; its direct inspiration was the 2011 game Human Revolution. C. Robert Cargill and Scott Derrickson were to write the screenplay, and Derrickson was to direct the film.
References
Notes
Footnotes
Sources
External links
Official page on Eidos site
2000 video games
Fiction set in 2052
Action role-playing video games
Cancelled Linux games
Cyberpunk video games
Cyberpunk
Postcyberpunk
Nanopunk
Deus Ex
Dystopian video games
Eidos Interactive games
Existentialist works
First-person shooters
Interactive Achievement Award winners
Ion Storm games
Classic Mac OS games
Multiplayer and single-player video games
Multiplayer online games
PlayStation 2 games
Stealth video games
Works about globalism
Unreal Engine games
Video games scored by Alexander Brandon
Video games scored by Dan Gardopée
Video games scored by Michiel van den Bos
Video games developed in the United States
Video games about viral outbreaks
Video games set in California
Video games set in Hong Kong
Video games set in Nevada
Video games set in New York City
Video games set in Paris
Video games set in the 2050s
Video games with alternate endings
Windows games
Works about conspiracy theories
Motion capture in video games
Immersive sims |
8492 | https://en.wikipedia.org/wiki/Discrete%20mathematics | Discrete mathematics | Discrete mathematics is the study of mathematical structures that can be considered "discrete" (in a way analogous to discrete variables, having a one-to-one correspondence with the set of natural numbers) rather than "continuous" (analogously to continuous functions). Objects studied in discrete mathematics include integers, graphs, and statements in logic. By contrast, discrete mathematics excludes topics in "continuous mathematics" such as real numbers, calculus or Euclidean geometry. Discrete objects can often be enumerated by integers; more formally, discrete mathematics has been characterized as the branch of mathematics dealing with countable sets (finite sets or sets with the same cardinality as the natural numbers). However, there is no exact definition of the term "discrete mathematics".
The set of objects studied in discrete mathematics can be finite or infinite. The term finite mathematics is sometimes applied to parts of the field of discrete mathematics that deals with finite sets, particularly those areas relevant to business.
Research in discrete mathematics increased in the latter half of the twentieth century partly due to the development of digital computers which operate in "discrete" steps and store data in "discrete" bits. Concepts and notations from discrete mathematics are useful in studying and describing objects and problems in branches of computer science, such as computer algorithms, programming languages, cryptography, automated theorem proving, and software development. Conversely, computer implementations are significant in applying ideas from discrete mathematics to real-world problems, such as in operations research.
Although the main objects of study in discrete mathematics are discrete objects, analytic methods from "continuous" mathematics are often employed as well.
In university curricula, "Discrete Mathematics" appeared in the 1980s, initially as a computer science support course; its contents were somewhat haphazard at the time. The curriculum has thereafter developed in conjunction with efforts by ACM and MAA into a course that is basically intended to develop mathematical maturity in first-year students; therefore, it is nowadays a prerequisite for mathematics majors in some universities as well. Some high-school-level discrete mathematics textbooks have appeared as well. At this level, discrete mathematics is sometimes seen as a preparatory course, not unlike precalculus in this respect.
The Fulkerson Prize is awarded for outstanding papers in discrete mathematics.
Grand challenges, past and present
The history of discrete mathematics has involved a number of challenging problems which have focused attention within areas of the field. In graph theory, much research was motivated by attempts to prove the four color theorem, first stated in 1852, but not proved until 1976 (by Kenneth Appel and Wolfgang Haken, using substantial computer assistance).
In logic, the second problem on David Hilbert's list of open problems presented in 1900 was to prove that the axioms of arithmetic are consistent. Gödel's second incompleteness theorem, proved in 1931, showed that this was not possible – at least not within arithmetic itself. Hilbert's tenth problem was to determine whether a given polynomial Diophantine equation with integer coefficients has an integer solution. In 1970, Yuri Matiyasevich proved that this could not be done.
The need to break German codes in World War II led to advances in cryptography and theoretical computer science, with the first programmable digital electronic computer being developed at England's Bletchley Park with the guidance of Alan Turing and his seminal work, On Computable Numbers. At the same time, military requirements motivated advances in operations research. The Cold War meant that cryptography remained important, with fundamental advances such as public-key cryptography being developed in the following decades. Operations research remained important as a tool in business and project management, with the critical path method being developed in the 1950s. The telecommunication industry has also motivated advances in discrete mathematics, particularly in graph theory and information theory. Formal verification of statements in logic has been necessary for software development of safety-critical systems, and advances in automated theorem proving have been driven by this need.
Computational geometry has been an important part of the computer graphics incorporated into modern video games and computer-aided design tools.
Several fields of discrete mathematics, particularly theoretical computer science, graph theory, and combinatorics, are important in addressing the challenging bioinformatics problems associated with understanding the tree of life.
Currently, one of the most famous open problems in theoretical computer science is the P = NP problem, which involves the relationship between the complexity classes P and NP. The Clay Mathematics Institute has offered a $1 million USD prize for the first correct proof, along with prizes for six other mathematical problems.
Topics in discrete mathematics
Theoretical computer science
Theoretical computer science includes areas of discrete mathematics relevant to computing. It draws heavily on graph theory and mathematical logic. Included within theoretical computer science is the study of algorithms and data structures. Computability studies what can be computed in principle, and has close ties to logic, while complexity studies the time, space, and other resources taken by computations. Automata theory and formal language theory are closely related to computability. Petri nets and process algebras are used to model computer systems, and methods from discrete mathematics are used in analyzing VLSI electronic circuits. Computational geometry applies algorithms to geometrical problems, while computer image analysis applies them to representations of images. Theoretical computer science also includes the study of various continuous computational topics.
Information theory
Information theory involves the quantification of information. Closely related is coding theory which is used to design efficient and reliable data transmission and storage methods. Information theory also includes continuous topics such as: analog signals, analog coding, analog encryption.
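As a minimal illustration of quantifying information, the sketch below computes the Shannon entropy (in bits) of a discrete probability distribution; the function name and the example distributions are invented for the example.

```python
import math

def entropy_bits(probabilities):
    """Shannon entropy H = -sum(p * log2(p)) of a discrete distribution, in bits."""
    return -sum(p * math.log2(p) for p in probabilities if p > 0)

print(entropy_bits([0.5, 0.5]))  # a fair coin carries 1.0 bit per toss
print(entropy_bits([0.9, 0.1]))  # a biased coin carries only about 0.47 bits
```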
Logic
Logic is the study of the principles of valid reasoning and inference, as well as of consistency, soundness, and completeness. For example, in most systems of logic (but not in intuitionistic logic) Peirce's law (((P→Q)→P)→P) is a theorem. For classical logic, it can be easily verified with a truth table. The study of mathematical proof is particularly important in logic, and has applications to automated theorem proving and formal verification of software.
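The truth-table check mentioned above can be carried out mechanically; the short sketch below brute-forces all classical truth assignments for Peirce's law (the helper function is written only for this example).

```python
def implies(a, b):
    # Classical material implication: a -> b is false only when a is true and b is false.
    return (not a) or b

# Peirce's law: ((P -> Q) -> P) -> P evaluates to true under every assignment.
for P in (False, True):
    for Q in (False, True):
        value = implies(implies(implies(P, Q), P), P)
        print(P, Q, value)
        assert value  # never fails, so the formula is a classical tautology
```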
Logical formulas are discrete structures, as are proofs, which form finite trees or, more generally, directed acyclic graph structures (with each inference step combining one or more premise branches to give a single conclusion). The truth values of logical formulas usually form a finite set, generally restricted to two values: true and false, but logic can also be continuous-valued, e.g., fuzzy logic. Concepts such as infinite proof trees or infinite derivation trees have also been studied, e.g. infinitary logic.
Set theory
Set theory is the branch of mathematics that studies sets, which are collections of objects, such as {blue, white, red} or the (infinite) set of all prime numbers. Partially ordered sets and sets with other relations have applications in several areas.
In discrete mathematics, countable sets (including finite sets) are the main focus. The beginning of set theory as a branch of mathematics is usually marked by Georg Cantor's work distinguishing between different kinds of infinite set, motivated by the study of trigonometric series, and further development of the theory of infinite sets is outside the scope of discrete mathematics. Indeed, contemporary work in descriptive set theory makes extensive use of traditional continuous mathematics.
Combinatorics
Combinatorics studies the way in which discrete structures can be combined or arranged.
Enumerative combinatorics concentrates on counting the number of certain combinatorial objects; for example, the twelvefold way provides a unified framework for counting permutations, combinations and partitions.
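As a small concrete check of such counts (a sketch using Python's standard library; the recursive partition counter is written here only for illustration):

```python
from itertools import permutations, combinations
from math import comb, factorial

n, k = 5, 3
# k-permutations and k-combinations of an n-element set, counted two ways.
assert len(list(permutations(range(n), k))) == factorial(n) // factorial(n - k) == 60
assert len(list(combinations(range(n), k))) == comb(n, k) == 10

# Integer partitions of n (order ignored), counted by a simple recursion.
def partitions(n, max_part=None):
    if max_part is None:
        max_part = n
    if n == 0:
        return 1
    return sum(partitions(n - p, p) for p in range(min(n, max_part), 0, -1))

print(partitions(5))  # 7 partitions: 5, 4+1, 3+2, 3+1+1, 2+2+1, 2+1+1+1, 1+1+1+1+1
```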
Analytic combinatorics concerns the enumeration (i.e., determining the number) of combinatorial structures using tools from complex analysis and probability theory. In contrast with enumerative combinatorics which uses explicit combinatorial formulae and generating functions to describe the results, analytic combinatorics aims at obtaining asymptotic formulae.
Design theory is a study of combinatorial designs, which are collections of subsets with certain intersection properties.
Partition theory studies various enumeration and asymptotic problems related to integer partitions, and is closely related to q-series, special functions and orthogonal polynomials. Originally a part of number theory and analysis, partition theory is now considered a part of combinatorics or an independent field.
Order theory is the study of partially ordered sets, both finite and infinite.
Graph theory
Graph theory, the study of graphs and networks, is often considered part of combinatorics, but has grown large enough and distinct enough, with its own kind of problems, to be regarded as a subject in its own right. Graphs are one of the prime objects of study in discrete mathematics. They are among the most ubiquitous models of both natural and human-made structures. They can model many types of relations and process dynamics in physical, biological and social systems. In computer science, they can represent networks of communication, data organization, computational devices, the flow of computation, etc. In mathematics, they are useful in geometry and certain parts of topology, e.g. knot theory. Algebraic graph theory has close links with group theory. There are also continuous graphs; however, for the most part, research in graph theory falls within the domain of discrete mathematics.
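A common computer representation of a graph is the adjacency list; the sketch below (with an invented six-vertex graph) uses breadth-first search to find the vertices reachable from a given one.

```python
from collections import deque

# A small undirected graph as an adjacency list (dictionary of neighbour lists).
graph = {
    "A": ["B", "C"],
    "B": ["A", "D"],
    "C": ["A"],
    "D": ["B"],
    "E": ["F"],
    "F": ["E"],
}

def reachable(graph, start):
    """Return the set of vertices reachable from start via breadth-first search."""
    seen = {start}
    queue = deque([start])
    while queue:
        v = queue.popleft()
        for w in graph[v]:
            if w not in seen:
                seen.add(w)
                queue.append(w)
    return seen

print(reachable(graph, "A"))  # A, B, C, D -- E and F form a separate component
```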
Probability
Discrete probability theory deals with events that occur in countable sample spaces. For example, count observations such as the numbers of birds in flocks comprise only natural number values {0, 1, 2, ...}. On the other hand, continuous observations such as the weights of birds comprise real number values and would typically be modeled by a continuous probability distribution such as the normal. Discrete probability distributions can be used to approximate continuous ones and vice versa. For highly constrained situations such as throwing dice or experiments with decks of cards, calculating the probability of events is basically enumerative combinatorics.
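For the dice example, the enumeration is short enough to carry out directly; the following sketch counts the ordered outcomes of two fair dice that sum to 7.

```python
from fractions import Fraction
from itertools import product

# Sample space of two fair six-sided dice: 36 equally likely ordered outcomes.
outcomes = list(product(range(1, 7), repeat=2))
favourable = [o for o in outcomes if sum(o) == 7]

p = Fraction(len(favourable), len(outcomes))
print(p)  # 1/6: six of the 36 outcomes sum to 7
```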
Number theory
Number theory is concerned with the properties of numbers in general, particularly integers. It has applications to cryptography and cryptanalysis, particularly with regard to modular arithmetic, diophantine equations, linear and quadratic congruences, prime numbers and primality testing. Other discrete aspects of number theory include geometry of numbers. In analytic number theory, techniques from continuous mathematics are also used. Topics that go beyond discrete objects include transcendental numbers, diophantine approximation, p-adic analysis and function fields.
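Two elementary examples in this vein are primality testing by trial division and modular exponentiation, shown here via Fermat's little theorem; this is a minimal sketch, not an efficient implementation.

```python
def is_prime(n):
    """Trial-division primality test."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

print([p for p in range(2, 30) if is_prime(p)])  # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]

# Fermat's little theorem: a^(p-1) ≡ 1 (mod p) for prime p and a not divisible by p.
p, a = 101, 7
print(pow(a, p - 1, p))  # 1
```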
Algebraic structures
Algebraic structures occur as both discrete examples and continuous examples. Discrete algebras include: boolean algebra used in logic gates and programming; relational algebra used in databases; discrete and finite versions of groups, rings and fields are important in algebraic coding theory; discrete semigroups and monoids appear in the theory of formal languages.
Calculus of finite differences, discrete calculus or discrete analysis
A function defined on an interval of the integers is usually called a sequence. A sequence could be a finite sequence from a data source or an infinite sequence from a discrete dynamical system. Such a discrete function could be defined explicitly by a list (if its domain is finite), or by a formula for its general term, or it could be given implicitly by a recurrence relation or difference equation. Difference equations are similar to differential equations, but replace differentiation by taking the difference between adjacent terms; they can be used to approximate differential equations or (more often) studied in their own right. Many questions and methods concerning differential equations have counterparts for difference equations. For instance, where there are integral transforms in harmonic analysis for studying continuous functions or analogue signals, there are discrete transforms for discrete functions or digital signals. As well as the discrete metric there are more general discrete or finite metric spaces and finite topological spaces.
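As an illustration of replacing a derivative by a difference, the sketch below approximates the differential equation y' = k·y by the difference equation y[n+1] = y[n] + h·k·y[n]; the step size and constants are chosen arbitrarily for the example.

```python
import math

# The differential equation y' = k*y has exact solution y(t) = y0 * exp(k*t).
# The forward difference over step h gives the difference equation
# y[n+1] = y[n] + h*k*y[n], whose iterates approximate the solution.
k, y0, h, steps = 0.5, 1.0, 0.01, 200   # integrate up to t = 2

y = y0
for _ in range(steps):
    y = y + h * k * y

exact = y0 * math.exp(k * steps * h)
print(y, exact)   # ~2.7115 vs e ~ 2.7183; shrinking h closes the gap
```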
Geometry
Discrete geometry and combinatorial geometry are about combinatorial properties of discrete collections of geometrical objects. A long-standing topic in discrete geometry is tiling of the plane. Computational geometry applies algorithms to geometrical problems.
Topology
Although topology is the field of mathematics that formalizes and generalizes the intuitive notion of "continuous deformation" of objects, it gives rise to many discrete topics; this can be attributed in part to the focus on topological invariants, which themselves usually take discrete values.
See combinatorial topology, topological graph theory, topological combinatorics, computational topology, discrete topological space, finite topological space, topology (chemistry).
Operations research
Operations research provides techniques for solving practical problems in engineering, business, and other fields — problems such as allocating resources to maximize profit, and scheduling project activities to minimize risk. Operations research techniques include linear programming and other areas of optimization, queuing theory, scheduling theory, and network theory. Operations research also includes continuous topics such as continuous-time Markov process, continuous-time martingales, process optimization, and continuous and hybrid control theory.
Game theory, decision theory, utility theory, social choice theory
Decision theory is concerned with identifying the values, uncertainties and other issues relevant in a given decision, its rationality, and the resulting optimal decision.
Utility theory is about measures of the relative economic satisfaction from, or desirability of, consumption of various goods and services.
Social choice theory is about voting. A more puzzle-based approach to voting is ballot theory.
Game theory deals with situations where success depends on the choices of others, which makes choosing the best course of action more complex. There are even continuous games, see differential game. Topics include auction theory and fair division.
Discretization
Discretization concerns the process of transferring continuous models and equations into discrete counterparts, often for the purposes of making calculations easier by using approximations. Numerical analysis provides an important example.
Discrete analogues of continuous mathematics
There are many concepts in continuous mathematics which have discrete versions, such as discrete calculus, discrete probability distributions, discrete Fourier transforms, discrete geometry, discrete logarithms, discrete differential geometry, discrete exterior calculus, discrete Morse theory, difference equations, discrete dynamical systems, and discrete vector measures.
In applied mathematics, discrete modelling is the discrete analogue of continuous modelling. In discrete modelling, discrete formulae are fit to data. A common method in this form of modelling is to use recurrence relations.
In algebraic geometry, the concept of a curve can be extended to discrete geometries by taking the spectra of polynomial rings over finite fields to be models of the affine spaces over that field, and letting subvarieties or spectra of other rings provide the curves that lie in that space. Although the space in which the curves appear has a finite number of points, the curves are not so much sets of points as analogues of curves in continuous settings. For example, every point of the form V(x − c) ⊂ Spec K[x] = A^1, for K a field, can be studied either as Spec K[x]/(x − c), a point, or as Spec K[x]_(x−c), the spectrum of the local ring at (x − c), a point together with a neighborhood around it. Algebraic varieties also have a well-defined notion of tangent space called the Zariski tangent space, making many features of calculus applicable even in finite settings.
Hybrid discrete and continuous mathematics
The time scale calculus is a unification of the theory of difference equations with that of differential equations, which has applications to fields requiring simultaneous modelling of discrete and continuous data. Another way of modeling such a situation is the notion of hybrid dynamical systems.
See also
Outline of discrete mathematics
Cyberchase, a show that teaches Discrete Mathematics to children
References
Further reading
Ronald Graham, Donald E. Knuth, Oren Patashnik, Concrete Mathematics.
External links
Discrete mathematics at the utk.edu Mathematics Archives, providing links to syllabi, tutorials, programs, etc.
Iowa Central: Electrical Technologies Program Discrete mathematics for Electrical engineering. |
8536 | https://en.wikipedia.org/wiki/Differential%20cryptanalysis | Differential cryptanalysis | Differential cryptanalysis is a general form of cryptanalysis applicable primarily to block ciphers, but also to stream ciphers and cryptographic hash functions. In the broadest sense, it is the study of how differences in information input can affect the resultant difference at the output. In the case of a block cipher, it refers to a set of techniques for tracing differences through the network of transformations, discovering where the cipher exhibits non-random behavior, and exploiting such properties to recover the secret key.
History
The discovery of differential cryptanalysis is generally attributed to Eli Biham and Adi Shamir in the late 1980s, who published a number of attacks against various block ciphers and hash functions, including a theoretical weakness in the Data Encryption Standard (DES). It was noted by Biham and Shamir that DES was surprisingly resistant to differential cryptanalysis but small modifications to the algorithm would make it much more susceptible.
In 1994, a member of the original IBM DES team, Don Coppersmith, published a paper stating that differential cryptanalysis was known to IBM as early as 1974, and that defending against differential cryptanalysis had been a design goal. According to author Steven Levy, IBM had discovered differential cryptanalysis on its own, and the NSA was apparently well aware of the technique. IBM kept some secrets, as Coppersmith explains: "After discussions with NSA, it was decided that disclosure of the design considerations would reveal the technique of differential cryptanalysis, a powerful technique that could be used against many ciphers. This in turn would weaken the competitive advantage the United States enjoyed over other countries in the field of cryptography." Within IBM, differential cryptanalysis was known as the "T-attack" or "Tickle attack".
While DES was designed with resistance to differential cryptanalysis in mind, other contemporary ciphers proved to be vulnerable. An early target for the attack was the FEAL block cipher. The original proposed version with four rounds (FEAL-4) can be broken using only eight chosen plaintexts, and even a 31-round version of FEAL is susceptible to the attack. In contrast, the scheme can successfully cryptanalyze DES only with an effort on the order of 2^47 chosen plaintexts.
Attack mechanics
Differential cryptanalysis is usually a chosen plaintext attack, meaning that the attacker must be able to obtain ciphertexts for some set of plaintexts of their choosing. There are, however, extensions that would allow a known plaintext or even a ciphertext-only attack. The basic method uses pairs of plaintext related by a constant difference. Difference can be defined in several ways, but the eXclusive OR (XOR) operation is usual. The attacker then computes the differences of the corresponding ciphertexts, hoping to detect statistical patterns in their distribution. The resulting pair of differences is called a differential. Their statistical properties depend upon the nature of the S-boxes used for encryption, so the attacker analyses differentials (ΔX, ΔY), where ΔY = S(X ⊕ ΔX) ⊕ S(X) (and ⊕ denotes exclusive or) for each such S-box S. In the basic attack, one particular ciphertext difference is expected to be especially frequent. In this way, the cipher can be distinguished from random. More sophisticated variations allow the key to be recovered faster than an exhaustive search.
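The object the attacker tabulates is often called a difference distribution table. The following sketch (illustrative only; the 4-bit S-box below is an arbitrary toy permutation, not the S-box of any standardized cipher) counts, for every input difference ΔX, how often each output difference ΔY occurs:

```python
# Minimal sketch: difference distribution table (DDT) of a toy 4-bit S-box.
# The S-box values are an arbitrary permutation chosen for illustration.
SBOX = [0xE, 0x4, 0xD, 0x1, 0x2, 0xF, 0xB, 0x8,
        0x3, 0xA, 0x6, 0xC, 0x5, 0x9, 0x0, 0x7]
N = 16                                   # 4-bit S-box => 16 possible values

ddt = [[0] * N for _ in range(N)]
for x in range(N):
    for dx in range(N):
        dy = SBOX[x] ^ SBOX[x ^ dx]      # output difference for input difference dx
        ddt[dx][dy] += 1                 # count how often (dx -> dy) occurs

# The largest entry with dx != 0 gives the most probable single S-box differential.
best = max(ddt[dx][dy] for dx in range(1, N) for dy in range(N))
print(f"most frequent non-trivial differential: {best}/{N}")
```

A cipher designer wants this maximum count to be as small as possible, while an attacker looks for the large entries.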
In the most basic form of key recovery through differential cryptanalysis, an attacker requests the ciphertexts for a large number of plaintext pairs, then assumes that the differential holds for at least r − 1 rounds, where r is the total number of rounds. The attacker then deduces which round keys (for the final round) are possible, assuming the difference between the blocks before the final round is fixed. When round keys are short, this can be achieved by simply exhaustively decrypting the ciphertext pairs one round with each possible round key. When one round key has been deemed a potential round key considerably more often than any other key, it is assumed to be the correct round key.
For any particular cipher, the input difference must be carefully selected for the attack to be successful. An analysis of the algorithm's internals is undertaken; the standard method is to trace a path of highly probable differences through the various stages of encryption, termed a differential characteristic.
Since differential cryptanalysis became public knowledge, it has become a basic concern of cipher designers. New designs are expected to be accompanied by evidence that the algorithm is resistant to this attack, and many, including the Advanced Encryption Standard, have been proven secure against the attack.
Attack in detail
The attack relies primarily on the fact that a given input/output difference pattern only occurs for certain values of inputs. Usually the attack is applied in essence to the non-linear components as if they were a solid component (usually they are in fact look-up tables or S-boxes). Observing the desired output difference (between two chosen or known plaintext inputs) suggests possible key values.
For example, if a differential of 1 => 1 (implying that a difference in the least significant bit (LSB) of the input leads to an output difference in the LSB) occurs with probability 4/256 (possible with the non-linear function in the AES cipher, for instance), then that differential is possible for only 4 values (or 2 pairs) of inputs. Suppose we have a non-linear function where the key is XOR'ed before evaluation and the values that allow the differential are {2,3} and {4,5}. If the attacker sends in the values {6, 7} and observes the correct output difference, it means that either 6 ⊕ K = 2 or 6 ⊕ K = 4, meaning the key K is either 2 or 4.
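The key-guessing step in this toy example can be replayed directly; in the sketch below the 4-bit S-box, the secret key and the chosen plaintexts are all hypothetical values picked only to mirror the reasoning above:

```python
# Minimal sketch of the key-guessing step: only key candidates consistent with
# the observed output difference survive. Toy 4-bit S-box, not a real cipher.
SBOX = [0x6, 0x4, 0xC, 0x5, 0x0, 0x7, 0x2, 0xE,
        0x1, 0xF, 0x3, 0xD, 0x8, 0xA, 0x9, 0xB]

def encrypt(x, key):
    return SBOX[x ^ key]                  # toy "cipher": XOR the key, then apply the S-box

secret_key = 0xA                          # unknown to the attacker
p0, p1 = 0x6, 0x7                         # chosen plaintexts with difference 1
observed_dc = encrypt(p0, secret_key) ^ encrypt(p1, secret_key)

candidates = [k for k in range(16)
              if SBOX[p0 ^ k] ^ SBOX[p1 ^ k] == observed_dc]
print("key candidates consistent with the differential:", candidates)
```

Repeating the experiment with further plaintext pairs narrows the candidate set, in practice singling out the correct key (here 0xA).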
In essence, for an n-bit non-linear function one would ideally seek as close to 2^−(n − 1) as possible to achieve differential uniformity. When this happens, the differential attack requires as much work to determine the key as simply brute forcing the key.
The AES non-linear function has a maximum differential probability of 4/256 (most entries, however, are either 0 or 2). This means that in theory one could determine the key with half as much work as brute force; however, the high branch number of AES prevents any high-probability trails from existing over multiple rounds. In fact, the AES cipher would be just as immune to differential and linear attacks with a much weaker non-linear function. The very high branch number (active S-box count) of 25 over 4 rounds means that over 8 rounds no attack involves fewer than 50 non-linear transforms, so the probability of success does not exceed Pr[attack] ≤ Pr[best attack on S-box]^50. For example, with the current S-box, AES emits no fixed differential with a probability higher than (4/256)^50, or 2^−300, which is far lower than the required threshold of 2^−128 for a 128-bit block cipher. This would have allowed room for a more efficient S-box: even if it were 16-uniform, the probability of attack would still have been 2^−200.
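The arithmetic behind this bound is easy to reproduce; the sketch below only re-derives the numbers quoted above and makes no cryptanalytic claim of its own:

```python
# Re-deriving the probability bound quoted above.
import math

p_sbox = 4 / 256          # best single S-box differential probability for AES
active = 50               # minimum number of active S-boxes over 8 rounds

log2_trail = active * math.log2(p_sbox)          # 50 * (-6) = -300
print(f"best 8-round trail probability <= 2^{log2_trail:.0f}")
print("below the 2^-128 threshold:", log2_trail < -128)

# The hypothetical 16-uniform S-box mentioned in the text:
print(f"with 16/256 per S-box: 2^{active * math.log2(16 / 256):.0f}")   # 2^-200
```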
There exist no bijections for even-sized inputs/outputs with 2-uniformity. They exist in odd fields (such as GF(2^7)) using either cubing or inversion (there are other exponents that can be used as well). For instance, S(x) = x^3 in any odd binary field is immune to differential and linear cryptanalysis. This is in part why the MISTY designs use 7- and 9-bit functions in their 16-bit non-linear function. What these functions gain in immunity to differential and linear attacks they lose to algebraic attacks: they are possible to describe and solve via a SAT solver. This is in part why AES (for instance) has an affine mapping after the inversion.
Specialized types
Higher-order differential cryptanalysis
Truncated differential cryptanalysis
Impossible differential cryptanalysis
Boomerang attack
See also
Cryptography
Integral cryptanalysis
Linear cryptanalysis
Differential equations of addition
References
General
Eli Biham, Adi Shamir, Differential Cryptanalysis of the Data Encryption Standard, Springer Verlag, 1993. , .
Biham, E. and A. Shamir. (1990). Differential Cryptanalysis of DES-like Cryptosystems. Advances in Cryptology — CRYPTO '90. Springer-Verlag. 2–21.
Eli Biham, Adi Shamir,"Differential Cryptanalysis of the Full 16-Round DES," CS 708, Proceedings of CRYPTO '92, Volume 740 of Lecture Notes in Computer Science, December 1991. (Postscript)
External links
A tutorial on differential (and linear) cryptanalysis
Helger Lipmaa's links on differential cryptanalysis
Cryptographic attacks |
8674 | https://en.wikipedia.org/wiki/Digital%20enhanced%20cordless%20telecommunications | Digital enhanced cordless telecommunications | Digital enhanced cordless telecommunications (Digital European cordless telecommunications), usually known by the acronym DECT, is a standard primarily used for creating cordless telephone systems. It originated in Europe, where it is the common standard, replacing earlier cordless phone standards, such as 900 MHz CT1 and CT2.
Beyond Europe, it has been adopted by Australia and most countries in Asia and South America. North American adoption was delayed by United States radio-frequency regulations. This forced development of a variation of DECT called DECT 6.0, using a slightly different frequency range, which makes these units incompatible with systems intended for use in other areas, even from the same manufacturer. DECT has almost completely replaced other standards in most countries where it is used, with the exception of North America.
DECT was originally intended for fast roaming between networked base stations, and the first DECT product was the Net3 wireless LAN. However, its most popular application is single-cell cordless phones connected to a traditional analog telephone line, primarily in home and small-office systems, though gateways with multi-cell DECT and/or DECT repeaters are also available in many private branch exchange (PBX) systems for medium and large businesses, produced by Panasonic, Mitel, Gigaset, Cisco, Grandstream, Snom, Spectralink, and RTX Telecom. DECT can also be used for purposes other than cordless phones, such as baby monitors and industrial sensors. The ULE Alliance's DECT ULE and its "HAN FUN" protocol are variants tailored for home security, automation, and the internet of things (IoT).
The DECT standard includes the generic access profile (GAP), a common interoperability profile for simple telephone capabilities, which most manufacturers implement. GAP-conformance enables DECT handsets and bases from different manufacturers to interoperate at the most basic level of functionality, that of making and receiving calls. Japan uses its own DECT variant, J-DECT, which is supported by the DECT forum.
The New Generation DECT (NG-DECT) standard, marketed as CAT-iq by the DECT Forum, provides a common set of advanced capabilities for handsets and base stations. CAT-iq allows interchangeability across IP-DECT base stations and handsets from different manufacturers, while maintaining backward compatibility with GAP equipment. It also requires mandatory support for wideband audio.
DECT-2020 New Radio is a 5G data transmission protocol which meets ITU-R IMT-2020 requirements for ultra-reliable low-latency and massive machine-type communications, and can co-exist with earlier DECT devices.
Standards history
The DECT standard was developed by ETSI in several phases, the first of which took place between 1988 and 1992 when the first round of standards were published. These were the ETS 300-175 series in nine parts defining the air interface, and ETS 300-176 defining how the units should be type approved. A technical report, ETR-178, was also published to explain the standard. Subsequent standards were developed and published by ETSI to cover interoperability profiles and standards for testing.
The standard was named Digital European Cordless Telephone at its launch by CEPT in November 1987; its name was soon changed to Digital European Cordless Telecommunications, following a suggestion by Enrico Tosato of Italy, to reflect its broader range of application, including data services. In 1995, due to its more global usage, the name was changed from European to Enhanced. DECT is recognized by the ITU as fulfilling the IMT-2000 requirements and thus qualifies as a 3G system. Within the IMT-2000 group of technologies, DECT is referred to as IMT-2000 Frequency Time (IMT-FT).
DECT was developed by ETSI but has since been adopted by many countries all over the world. The original DECT frequency band (1880–1900 MHz) is used in all countries in Europe. Outside Europe, it is used in most of Asia, Australia and South America. In the United States, the Federal Communications Commission in 2005 changed channelization and licensing costs in a nearby band (1920–1930 MHz, or 1.9 GHz), known as Unlicensed Personal Communications Services (UPCS), allowing DECT devices to be sold in the U.S. with only minimal changes. These channels are reserved exclusively for voice communication applications and therefore are less likely to experience interference from other wireless devices such as baby monitors and wireless networks.
The New Generation DECT (NG-DECT) standard was first published in 2007; it was developed by ETSI with guidance from the Home Gateway Initiative through the DECT Forum to support IP-DECT functions in home gateway/IP-PBX equipment. The ETSI TS 102 527 series comes in five parts and covers wideband audio and mandatory interoperability features between handsets and base stations. They were preceded by an explanatory technical report, ETSI TR 102 570. The DECT Forum maintains the CAT-iq trademark and certification program; CAT-iq wideband voice profile 1.0 and interoperability profiles 2.0/2.1 are based on the relevant parts of ETSI TS 102 527.
The DECT Ultra Low Energy (DECT ULE) standard was announced in January 2011 and the first commercial products were launched later that year by Dialog Semiconductor. The standard was created to enable home automation, security, healthcare and energy monitoring applications that are battery powered. Like DECT, the DECT ULE standard uses the 1.9 GHz band, and so suffers less interference than Zigbee, Bluetooth, or Wi-Fi from microwave ovens, which all operate in the unlicensed 2.4 GHz ISM band. DECT ULE uses a simple star network topology, so many devices in the home are connected to a single control unit.
A new low-complexity audio codec, LC3plus, has been added as an option to the 2019 revision of the DECT standard. This codec is designed for high-quality voice and music applications, and supports scalable narrowband, wideband, super wideband, and fullband coding, with sample rates of 8, 16, 24, 32 and 48 kHz and audio bandwidth of up to 20 kHz.
DECT-2020 New Radio protocol was published in July 2020; it defines a new physical interface based on Cyclic Prefix Orthogonal Frequency-Division Multiplexing (CP-OFDM) capable of up to 1.2 Gbit/s transfer rate with QAM-1024 modulation. The updated standard supports multi-antenna MIMO and beamforming, FEC channel coding, and hybrid automatic repeat request. There are 17 radio channel frequencies in the range from 450 MHz up to 5,875 MHz, and channel bandwidths of 1728, 3456, or 6912 kHz. Direct communication between end devices is possible with a mesh network topology.
In October 2021, DECT-2020 NR was approved for the IMT-2020 standard, for use in Massive Machine Type Communications (MMTC) industry automation, Ultra-Reliable Low-Latency Communications (URLLC), and professional wireless audio applications with point-to-point or multicast communications; the proposal was fast-tracked by ITU-R following real-world evaluations.
OFDMA and SC-FDMA modulations were also considered by the ETSI DECT committee.
OpenD is an open-source framework designed to provide a complete software implementation of DECT ULE protocols on reference hardware from Dialog Semiconductor and DSP Group; the project is maintained by the DECT forum.
Application
The DECT standard originally envisaged three major areas of application:
Domestic cordless telephony, using a single base station to connect one or more handsets to the public telecommunications network.
Enterprise premises cordless PABXs and wireless LANs, using many base stations for coverage. Calls continue as users move between different coverage cells, through a mechanism called handover. Calls can be both within the system and to the public telecommunications network.
Public access, using large numbers of base stations to provide high capacity building or urban area coverage as part of a public telecommunications network.
Of these, the domestic application (cordless home telephones) has been extremely successful. The enterprise PABX market had some success, and all the major PABX vendors have offered DECT access options. The public access application did not succeed, since public cellular networks rapidly out-competed DECT by coupling their ubiquitous coverage with large increases in capacity and continuously falling costs. There has been only one major installation of DECT for public access: in early 1998 Telecom Italia launched a wide-area DECT network known as "Fido" after much regulatory delay, covering major cities in Italy. The service was promoted for only a few months and, having peaked at 142,000 subscribers, was shut down in 2001.
DECT has been used for wireless local loop as a substitute for copper pairs in the "last mile" in countries such as India and South Africa. By using directional antennas and sacrificing some traffic capacity, cell coverage could be extended to distances of several kilometres. One example is the corDECT standard.
The first data application for DECT was Net3 wireless LAN system by Olivetti, launched in 1993 and discontinued in 1995. A precursor to Wi-Fi, Net3 was a micro-cellular data-only network with fast roaming between base stations and 520 kbit/s transmission rates.
Data applications such as electronic cash terminals, traffic lights, and remote door openers also exist, but have been eclipsed by Wi-Fi, 3G and 4G which compete with DECT for both voice and data.
DECT 6.0
DECT 6.0 is a North American marketing term for DECT devices manufactured for the United States and Canada operating at 1.9 GHz. The "6.0" does not equate to a spectrum band; it was decided the term DECT 1.9 might have confused customers who equate larger numbers (such as the 2.4 and 5.8 in existing 2.4 GHz and 5.8 GHz cordless telephones) with later products. The term was coined by Rick Krupka, marketing director at Siemens and the DECT USA Working Group / Siemens ICM.
In North America, DECT suffers from deficiencies in comparison to DECT elsewhere, since the UPCS band (1920–1930 MHz) is not free from heavy interference. Bandwidth is half as wide as that used in Europe (1880–1900 MHz), the 4 mW average transmission power reduces range compared to the 10 mW permitted in Europe, and the commonplace lack of GAP compatibility among US vendors binds customers to a single vendor.
Before the 1.9 GHz band was approved by the FCC in 2005, DECT could only operate in the unlicensed 2.4 GHz and 900 MHz Region 2 ISM bands; some users of Uniden WDECT 2.4 GHz phones reported interoperability issues with Wi-Fi equipment.
North-American products may not be used in Europe, Pakistan, Sri Lanka, and Africa, as they cause and suffer from interference with the local cellular networks. Use of such products is prohibited by European Telecommunications Authorities, PTA, Telecommunications Regulatory Commission of Sri Lanka and the Independent Communication Authority of South Africa. European DECT products may not be used in the United States and Canada, as they likewise cause and suffer from interference with American and Canadian cellular networks, and use is prohibited by the Federal Communications Commission and Industry Canada.
DECT 8.0 HD is a marketing designation for North American DECT devices certified with CAT-iq 2.0 "Multi Line" profile.
NG-DECT/CAT-iq
Cordless Advanced Technology—internet and quality (CAT-iq) is a certification program maintained by the DECT Forum. It is based on New Generation DECT (NG-DECT) series of standards from ETSI.
NG-DECT/CAT-iq contains features that expand the generic GAP profile with mandatory support for high quality wideband voice, enhanced security, calling party identification, multiple lines, parallel calls, and similar functions to facilitate VoIP calls through SIP and H.323 protocols.
There are several CAT-iq profiles which define supported voice features:
CAT-iq 1.0 "HD Voice" (ETSI TS 102 527-1): wideband audio, calling party line and name identification (CLIP/CNAP)
CAT-iq 2.0 "Multi Line" (ETSI TS 102 527-3): multiple lines, line name, call waiting, call transfer, phonebook, call list, DTMF tones, headset, settings
CAT-iq 2.1 "Green" (ETSI TS 102 527-5): 3-party conference, call intrusion, caller blocking (CLIR), answering machine control, SMS, power-management
CAT-iq Data light data services, software upgrade over the air (SUOTA) (ETSI TS 102 527-4)
CAT-iq IOT Smart Home connectivity (IOT) with DECT Ultra Low Energy (ETSI TS 102 939)
CAT-iq allows any DECT handset to communicate with a DECT base from a different vendor, providing full interoperability. CAT-iq 2.0/2.1 feature set is designed to support IP-DECT base stations found in office IP-PBX and home gateways.
Technical features
The DECT standard specifies a means for a portable phone or "Portable Part" to access a fixed telephone network via radio. Base station or "Fixed Part" is used to terminate the radio link and provide access to a fixed line. A gateway is then used to connect calls to the fixed network, such as public switched telephone network (telephone jack), office PBX, ISDN, or VoIP over Ethernet connection.
Typical abilities of a domestic DECT Generic Access Profile (GAP) system include multiple handsets to one base station and one phone line socket. This allows several cordless telephones to be placed around the house, all operating from the same telephone jack. Additional handsets have a battery charger station that does not plug into the telephone system. Handsets can in many cases be used as intercoms, communicating between each other, and sometimes as walkie-talkies, intercommunicating without telephone line connection.
DECT operates in the 1880–1900 MHz band and defines ten frequency channels from 1881.792 MHz to 1897.344 MHz with a carrier spacing of 1728 kHz.
DECT operates as a multicarrier frequency-division multiple access (FDMA) and time-division multiple access (TDMA) system. This means that the radio spectrum is divided into physical channels in two dimensions: frequency and time. FDMA access provides up to 10 frequency channels, and TDMA access provides 24 time slots per every frame of 10 ms. DECT uses time-division duplex (TDD), which means that downlink and uplink use the same frequency but different time slots. Thus a base station provides 12 duplex speech channels in each frame, and each time slot can use any available frequency channel, so 10 × 12 = 120 duplex channels are available, each carrying 32 kbit/s.
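A short calculation using only the figures quoted above (carrier numbering conventions in the actual standard may differ) reproduces the carrier grid and the total channel count:

```python
# Re-deriving the DECT carrier grid and duplex channel count from the figures
# quoted above: 10 carriers, 1.728 MHz spacing, starting at 1881.792 MHz.
spacing_mhz = 1.728
first_mhz = 1881.792
carriers = [round(first_mhz + c * spacing_mhz, 3) for c in range(10)]

print("carriers (MHz):", carriers)                    # last entry is 1897.344
print("duplex speech channels:", len(carriers) * 12)  # 10 x 12 = 120
```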
DECT also provides frequency-hopping spread spectrum over TDMA/TDD structure for ISM band applications. If frequency-hopping is avoided, each base station can provide up to 120 channels in the DECT spectrum before frequency reuse. Each timeslot can be assigned to a different channel in order to exploit advantages of frequency hopping and to avoid interference from other users in asynchronous fashion.
DECT allows interference-free wireless operation at ranges of up to a few hundred metres outdoors. Indoor performance is reduced when interior spaces are constrained by walls.
DECT performs with fidelity in common congested domestic radio traffic situations. It is generally immune to interference from other DECT systems, Wi-Fi networks, video senders, Bluetooth technology, baby monitors and other wireless devices.
Technical properties
ETSI standards documentation ETSI EN 300 175 parts 1–8 (DECT), ETSI EN 300 444 (GAP) and ETSI TS 102 527 parts 1–5 (NG-DECT) prescribe the following technical properties:
Audio codec:
mandatory:
32 kbit/s G.726 ADPCM (narrow band),
64 kbit/s G.722 sub-band ADPCM (wideband)
optional:
64 kbit/s G.711 μ-law/A-law PCM (narrow band),
32 kbit/s G.729.1 (wideband),
32 kbit/s MPEG-4 ER AAC-LD (wideband),
64 kbit/s MPEG-4 ER AAC-LD (super-wideband)
Frequency: the DECT physical layer specifies RF carriers for the frequency ranges 1880 MHz to 1980 MHz and 2010 MHz to 2025 MHz, as well as the 902 MHz to 928 MHz and 2400 MHz to 2483.5 MHz ISM bands with frequency-hopping for the U.S. market. The most common spectrum allocation is 1880 MHz to 1900 MHz; outside Europe, 1900 MHz to 1920 MHz and 1910 MHz to 1930 MHz spectrum is available in several countries.
in Europe, as well as South Africa, Asia, Hong Kong, Australia, and New Zealand
in Korea
in Taiwan
(J-DECT) in Japan
in China (until 2003)
in Brazil
in Latin America
(DECT 6.0) in the United States and Canada
Carriers (1.728 MHz spacing):
10 channels in Europe and Latin America
8 channels in Taiwan
5 channels in the US, Brazil, Japan
3 channels in Korea
Time slots: 2 × 12 (up and down stream)
Channel allocation: dynamic
Average transmission power: 10 mW (250 mW peak) in Europe & Japan, 4 mW (100 mW peak) in the US
Physical layer
The DECT physical layer uses FDMA/TDMA access with TDD.
Gaussian frequency-shift keying (GFSK) modulation is used: the binary one is coded with a frequency increase of 288 kHz, and the binary zero with a frequency decrease of 288 kHz. With high quality connections, 2-, 4- or 8-level differential PSK modulation (DBPSK, DQPSK or D8PSK), which is similar to QAM-2, QAM-4 and QAM-8, can be used to transmit 1, 2, or 3 bits per symbol. QAM-16 and QAM-64 modulations, with 4 and 6 bits per symbol respectively, can be used for user data (B-field) only, with resulting transmission speeds of up to 5.068 Mbit/s.
DECT provides dynamic channel selection and assignment; the choice of transmission frequency and time slot is always made by the mobile terminal. In case of interference in the selected frequency channel, the mobile terminal (possibly from suggestion by the base station) can initiate either intracell handover, selecting another channel/transmitter on the same base, or intercell handover, selecting a different base station altogether. For this purpose, DECT devices scan all idle channels at regular 30s intervals to generate a received signal strength indication (RSSI) list. When a new channel is required, the mobile terminal (PP) or base station (FP) selects a channel with the minimum interference from the RSSI list.
The maximum allowed power for portable equipment as well as base stations is 250 mW. A portable device radiates an average of about 10 mW during a call as it is only using one of 24 time slots to transmit. In Europe, the power limit was expressed as effective radiated power (ERP), rather than the more commonly used equivalent isotropically radiated power (EIRP), permitting the use of high-gain directional antennas to produce much higher EIRP and hence long ranges.
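The difference between the two power measures can be made concrete; the 2.15 dB dipole-to-isotropic factor used below is the general antenna-theory relation between ERP and EIRP, not a figure taken from the DECT specification:

```python
# Converting an ERP figure to the equivalent EIRP using the standard 2.15 dB
# gain of a half-wave dipole over an isotropic radiator (EIRP = ERP + 2.15 dB).
import math

erp_mw = 250                              # peak power limit quoted above
erp_dbm = 10 * math.log10(erp_mw)         # ~24.0 dBm ERP
eirp_dbm = erp_dbm + 2.15                 # dipole-to-isotropic correction
eirp_mw = 10 ** (eirp_dbm / 10)           # ~410 mW EIRP

print(f"{erp_mw} mW ERP = {erp_dbm:.1f} dBm ERP = {eirp_dbm:.1f} dBm EIRP = {eirp_mw:.0f} mW EIRP")
```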
Data link layer
The DECT media access control layer controls the physical layer and provides connection oriented, connectionless and broadcast services to the higher layers.
The DECT data link layer uses Link Access Protocol Control (LAPC), a specially designed variant of the ISDN data link protocol called LAPD. They are based on HDLC.
GFSK modulation uses a bit rate of 1152 kbit/s, with a frame of 10 ms (11,520 bits) which contains 24 time slots. Each slot contains 480 bits, some of which are reserved for the physical packet and the rest of which is guard space. Slots 0–11 are always used for downlink (FP to PP) and slots 12–23 are used for uplink (PP to FP).
There are several combinations of slots and corresponding types of physical packets with GFSK modulation:
Basic packet (P32): 420 or 424 bits, "full slot", used for normal speech transmission. User data (B-field) contains 320 bits.
Low-capacity packet (P00): 96 bits at the beginning of the time slot ("short slot"). This packet only contains the 64-bit header (A-field), used as a dummy bearer to broadcast base station identification when idle.
Variable capacity packet (P00j): 100 + j or 104 + j bits, either two half-slots (0 ≤ j ≤ 136) or a "long slot" (137 ≤ j ≤ 856). User data (B-field) contains j bits.
P64 (j = 640) and P67 (j = 672) "long slots" are used by NG-DECT/CAT-iq wideband voice and data.
High-capacity packet (P80): 900 or 904 bits, "double slot". This packet uses two time slots and always begins in an even time slot. The B-field is increased to 800 bits.
The 420/424 bits of a GFSK basic packet (P32) contain the following fields:
32 bits synchronization code (S-field): constant bit string AAAAE98AH for FP transmission, 55551675H for PP transmission
388 bits data (D-field), including
64 bits header (A-field): control traffic in logical channels C, M, N, P, and Q
320 bits user data (B-field): DECT payload, i.e. voice data
4 bits error-checking (X-field): CRC of the B-field
4 bits collision detection/channel quality (Z-field): optional, contains a copy of the X-field
The resulting full data rate is 32 kbit/s, available in both directions.
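The quoted figures fit together arithmetically; the sketch below simply re-derives them from the bit rate and frame structure described above:

```python
# Re-deriving the DECT frame arithmetic: 1152 kbit/s, 10 ms frames, 24 slots,
# and a 320-bit B-field per basic packet.
bit_rate = 1_152_000          # bits per second (GFSK modulation)
frame_s = 0.010               # 10 ms frame
slots_per_frame = 24

bits_per_frame = round(bit_rate * frame_s)          # 11,520 bits
bits_per_slot = bits_per_frame // slots_per_frame   # 480 bits (packet + guard space)

b_field_bits = 320                                  # user data in a basic (P32) packet
frames_per_second = round(1 / frame_s)              # 100 frames per second
user_rate = b_field_bits * frames_per_second        # 32,000 bit/s per direction

print(bits_per_frame, bits_per_slot, user_rate)     # 11520 480 32000
```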
Network layer
The DECT network layer always contains the following protocol entities:
Call Control (CC)
Mobility Management (MM)
Optionally it may also contain others:
Call Independent Supplementary Services (CISS)
Connection Oriented Message Service (COMS)
Connectionless Message Service (CLMS)
All these communicate through a Link Control Entity (LCE).
The call control protocol is derived from ISDN DSS1, which is a Q.931-derived protocol. Many DECT-specific changes have been made.
The mobility management protocol includes the management of identities, authentication, location updating, on-air subscription and key allocation. It includes many elements similar to the GSM protocol, but also includes elements unique to DECT.
Unlike the GSM protocol, the DECT network specifications do not define cross-linkages between the operation of the entities (for example, Mobility Management and Call Control). The architecture presumes that such linkages will be designed into the interworking unit that connects the DECT access network to whatever mobility-enabled fixed network is involved. By keeping the entities separate, the handset is capable of responding to any combination of entity traffic, and this creates great flexibility in fixed network design without breaking full interoperability.
DECT GAP is an interoperability profile for DECT. The intent is that two different products from different manufacturers that both conform not only to the DECT standard, but also to the GAP profile defined within the DECT standard, are able to interoperate for basic calling. The DECT standard includes full testing suites for GAP, and GAP products on the market from different manufacturers are in practice interoperable for the basic functions.
Security
The DECT media access control layer includes authentication of handsets to the base station using the DECT Standard Authentication Algorithm (DSAA). When registering the handset on the base, both record a shared 128-bit Unique Authentication Key (UAK). The base can request authentication by sending two random numbers to the handset, which calculates the response using the shared 128-bit key. The handset can also request authentication by sending a 64-bit random number to the base, which chooses a second random number, calculates the response using the shared key, and sends it back with the second random number.
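The DSAA itself is not fully public, so the sketch below only illustrates the general challenge–response pattern described above; HMAC-SHA-256 stands in for the proprietary algorithm, and the key sizes and message layout are simplified assumptions:

```python
# Illustrative challenge-response flow only. HMAC-SHA-256 is a stand-in for the
# proprietary DECT Standard Authentication Algorithm (DSAA); formats are simplified.
import hmac, hashlib, os

uak = os.urandom(16)                    # 128-bit shared Unique Authentication Key

def auth_response(key, challenge_a, challenge_b):
    # Both sides derive the same short response from the shared key and challenges.
    return hmac.new(key, challenge_a + challenge_b, hashlib.sha256).digest()[:8]

# Base station (fixed part) authenticates the handset (portable part):
rs, rand_f = os.urandom(8), os.urandom(8)         # random challenges sent to the handset
handset_reply = auth_response(uak, rs, rand_f)    # computed on the handset
base_expected = auth_response(uak, rs, rand_f)    # computed independently on the base

print("handset accepted:", hmac.compare_digest(handset_reply, base_expected))
```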
The standard also provides encryption services with the DECT Standard Cipher (DSC). The encryption is fairly weak, using a 35-bit initialization vector and encrypting the voice stream with 64-bit encryption. While most of the DECT standard is publicly available, the part describing the DECT Standard Cipher was only available under a non-disclosure agreement to the phones' manufacturers from ETSI.
The properties of the DECT protocol make it hard to intercept a frame, modify it and send it later again, as DECT frames are based on time-division multiplexing and need to be transmitted at a specific point in time. Unfortunately very few DECT devices on the market implemented authentication and encryption procedures and even when encryption was used by the phone, it was possible to implement a man-in-the-middle attack impersonating a DECT base station and revert to unencrypted mode which allows calls to be listened to, recorded, and re-routed to a different destination.
After an unverified report of a successful attack in 2002, members of the deDECTed.org project actually did reverse engineer the DECT Standard Cipher in 2008, and as of 2010 there has been a viable attack on it that can recover the key.
In 2012, an improved authentication algorithm, the DECT Standard Authentication Algorithm 2 (DSAA2), and improved version of the encryption algorithm, the DECT Standard Cipher 2 (DSC2), both based on AES 128-bit encryption, were included as optional in the NG-DECT/CAT-iq suite.
DECT Forum also launched the DECT Security certification program which mandates the use of previously optional security features in the GAP profile, such as early encryption and base authentication.
Profiles
Various access profiles have been defined in the DECT standard:
Public Access Profile (PAP) (deprecated)
Generic Access Profile (GAP) ETSI EN 300 444
Cordless Terminal Mobility (CTM) Access Profile (CAP) ETSI EN 300 824
Data access profiles
DECT Packet Radio System (DPRS) ETSI EN 301 649
DECT Multimedia Access Profile (DMAP)
Multimedia in the Local Loop Access Profile (MRAP)
Open Data Access Profile (ODAP)
Radio in the Local Loop (RLL) Access Profile (RAP) ETSI ETS 300 765
Interworking profiles (IWP)
DECT/ISDN Interworking Profile (IIP) ETSI EN 300 434
DECT/GSM Interworking Profile (GIP) ETSI EN 301 242
DECT/UMTS Interworking Profile (UIP) ETSI TS 101 863
DECT for data networks
Other interoperability profiles exist in the DECT suite of standards, and in particular the DPRS (DECT Packet Radio Services) bring together a number of prior interoperability profiles for the use of DECT as a wireless LAN and wireless internet access service. With good indoor range (and considerably greater range outdoors using directional antennae), dedicated spectrum, high interference immunity, open interoperability and data speeds of around 500 kbit/s, DECT appeared at one time to be a superior alternative to Wi-Fi. The protocol capabilities built into the DECT networking protocol standards were particularly good at supporting fast roaming in the public space, between hotspots operated by competing but connected providers. The first DECT product to reach the market, Olivetti's Net3, was a wireless LAN, and German firms Dosch & Amand and Hoeft & Wessel built niche businesses on the supply of data transmission systems based on DECT.
However, the timing of the availability of DECT, in the mid-1990s, was too early to find wide application for wireless data outside niche industrial applications. Whilst contemporary providers of Wi-Fi struggled with the same issues, providers of DECT retreated to the more immediately lucrative market for cordless telephones. A key weakness was also the inaccessibility of the U.S. market, due to FCC spectrum restrictions at that time. By the time mass applications for wireless Internet had emerged, and the U.S. had opened up to DECT, well into the new century, the industry had moved far ahead in terms of performance and DECT's time as a technically competitive wireless data transport had passed.
Health and safety
DECT uses UHF radio, similar to mobile phones, baby monitors, Wi-Fi, and other cordless telephone technologies. The UK Health Protection Agency (HPA) claims that due to a mobile phone's adaptive power ability, a DECT cordless phone's radiation could actually exceed the radiation of a mobile phone. A DECT cordless phone's radiation has an average output power of 10 mW but is in the form of 100 bursts per second of 250 mW, a strength comparable to some mobile phones.
Most studies have been unable to demonstrate any link to health effects, or have been inconclusive. Electromagnetic fields may have an effect on protein expression in laboratory settings but have not yet been demonstrated to have clinically significant effects in real-world settings. The World Health Organization has issued a statement on medical effects of mobile phones which acknowledges that the longer term effects (over several decades) require further research.
See also
GSM Interworking Profile (GIP)
IP-DECT
CT2 (DECT's predecessor in Europe)
Net3
CorDECT
WDECT
Unlicensed Personal Communications Services
Microcell
Wireless local loop
References
Standards
ETSI EN 300 175 V2.8.5 (2021-12). Digital Enhanced Cordless Telecommunications (DECT) Common Interface (CI)
ETSI EN 300 175-1. Part 1: Overview
ETSI EN 300 175-2. Part 2: Physical Layer (PHL)
ETSI EN 300 175-3. Part 3: Medium Access Control (MAC) layer
ETSI EN 300 175-4. Part 4: Data Link Control (DLC) layer
ETSI EN 300 175-5. Part 5: Network (NWK) layer
ETSI EN 300 175-6. Part 6: Identities and addressing
ETSI EN 300 175-7. Part 7: Security features
ETSI EN 300 175-8. Part 8: Speech and audio coding and transmission
ETSI TS 102 939. Digital Enhanced Cordless Telecommunications (DECT) Ultra Low Energy (ULE) Machine to Machine Communications
ETSI TS 102 939-1 V1.3.1 (2017-10). Part 1: Home Automation Network (phase 1)
ETSI TS 102 939-2 V1.3.1 (2019-01). Part 2: Home Automation Network (phase 2)
ETSI TS 102 527. Digital Enhanced Cordless Telecommunications (DECT) New Generation DECT
ETSI TS 102 527-1 V1.5.1 (2019-08). Part 1: Wideband speech
ETSI TS 102 527-2 V1.1.1 (2007-06). Part 2: Support of transparent IP packet data
ETSI TS 102 527-3 V1.7.1 (2019-08). Part 3: Extended wideband speech services
ETSI TS 102 527-4 V1.3.1 (2015-11). Part 4: Light Data Services; Software Update Over The Air (SUOTA), content downloading and HTTP based applications
ETSI TS 102 527-5 V1.3.1 (2019-08). Part 5: Additional feature set nr. 1 for extended wideband speech services
ETSI TS 103 636 v1.3.1 (2021-12). DECT-2020 New Radio (NR)
ETSI TS 103 636-1. Part 1: Overview
ETSI TS 103 636-2. Part 2: Radio reception and transmission requirements
ETSI TS 103 636-3. Part 3: Physical layer
ETSI TS 103 636-4. Part 4: MAC layer
ETSI TS 103 636-5. Part 5: DLC and Convergence layer
Digital Enhanced Cordless Telecommunications (DECT)
ETSI TS 103 634 V1.3.1 (2021-10). Low Complexity Communication Codec plus (LC3plus)
ETSI EN 300 444 V2.5.1 (2017-10). Generic Access Profile (GAP)
ETSI TS 103 706 V1.1.1 (2022-01). Advanced Audio Profile
ETSI EN 300 824 V1.3.1 (2001-08). Cordless Terminal Mobility (CTM) – CTM Access Profile (CAP)
ETSI EN 300 700 V2.2.1 (2018-12). Wireless Relay Station (WRS)
ETSI EN 301 649 V2.3.1 (2015-03). DECT Packet Radio Service (DPRS)
ETSI EN 300 757 V1.5.1 (2004-09). Low Rate Messaging Service (LRMS) including Short Messaging Service (SMS)
Further reading
Technical Report: Multicell Networks based on DECT and CAT-iq . Dosch & Amand Research
External links
DECT Forum at dect.org
DECT information at ETSI
DECTWeb.com
Open source implementation of a DECT stack
Broadband
Local loop
Mobile telecommunications standards
Software-defined radio
Wireless communication systems |
8735 | https://en.wikipedia.org/wiki/BIND | BIND | BIND is a suite of software for interacting with the Domain Name System (DNS). Its most prominent component, named (pronounced name-dee, short for name daemon), performs both of the main DNS server roles, acting as an authoritative name server for DNS zones and as a recursive resolver in the network. As of 2015, it is the most widely used domain name server software, and is the de facto standard on Unix-like operating systems. Also contained in the suite are various administration tools such as nsupdate and dig, and a DNS resolver interface library.
The software was originally designed at the University of California, Berkeley (UCB) in the early 1980s. The name originates as an acronym of Berkeley Internet Name Domain, reflecting the application's use within UCB. The latest version is BIND 9, first released in 2000 and still actively maintained by the Internet Systems Consortium (ISC) with new releases issued several times a year.
Key features
BIND 9 is intended to be fully compliant with the IETF DNS standards and draft standards. Important features of BIND 9 include: TSIG, nsupdate, IPv6, RNDC (remote name daemon control), views, multiprocessor support, Response Rate Limiting (RRL), DNSSEC, and broad portability. RNDC enables remote configuration updates, using a shared secret to provide encryption for local and remote terminals during each session.
Database support
While earlier versions of BIND offered no mechanism to store and retrieve zone data in anything other than flat text files, in 2007 BIND 9.4 DLZ provided a compile-time option for zone storage in a variety of database formats including LDAP, Berkeley DB, PostgreSQL, MySQL, and ODBC.
BIND 10 planned to make the data store modular, so that a variety of databases may be connected.
In 2016 ISC added support for the 'dyndb' interface, contributed by Red Hat, with BIND version 9.11.0.
Security
Security issues that are discovered in BIND 9 are patched and publicly disclosed in keeping with common principles of open source software. A complete list of security defects that have been discovered and disclosed in BIND9 is maintained by Internet Systems Consortium, the current authors of the software.
The BIND 4 and BIND 8 releases both had serious security vulnerabilities. Use of these ancient versions, or any un-maintained, non-supported version is strongly discouraged. BIND 9 was a complete rewrite, in part to mitigate these ongoing security issues. The downloads page on the ISC web site clearly shows which versions are currently maintained and which are end of life.
History
BIND was originally written by four graduate students at the Computer Systems Research Group (CSRG) at the University of California, Berkeley, Douglas Terry, Mark Painter, David Riggle and Songnian Zhou, in the early 1980s as a result of a DARPA grant. The acronym BIND is for Berkeley Internet Name Domain, from a technical paper published in 1984. It was first released with Berkeley Software Distribution 4.3BSD.
Versions of BIND through 4.8.3 were maintained by the CSRG.
Paul Vixie of Digital Equipment Corporation (DEC) took over BIND development in 1988, releasing versions 4.9 and 4.9.1. Vixie continued to work on BIND after leaving DEC. BIND Version 4.9.2 was sponsored by Vixie Enterprises. Vixie eventually founded the Internet Software Consortium (ISC), which became the entity responsible for BIND versions starting with 4.9.3.
BIND 8 was released by ISC in May 1997.
Version 9 was developed by Nominum, Inc. under an ISC outsourcing contract, and the first version was released 9 October 2000. It was written from scratch in part to address the architectural difficulties with auditing the earlier BIND code bases, and also to support DNSSEC (DNS Security Extensions). The development of BIND 9 took place under a combination of commercial and military contracts. Most of the features of BIND 9 were funded by UNIX vendors who wanted to ensure that BIND stayed competitive with Microsoft's DNS offerings; the DNSSEC features were funded by the US military, which regarded DNS security as important.
In 2009, ISC started an effort to develop a new version of the software suite, initially called BIND10. In addition to DNS service, the BIND10 suite also included IPv4 and IPv6 DHCP server components. In April 2014, with BIND10 release 1.2.0 the ISC concluded its involvement in the project and renamed it to Bundy, moving the source code repository to GitHub for further development by outside public efforts. ISC discontinued its involvement in the project due to cost-cutting measures. The development of DHCP components was split off to become a new Kea project.
See also
Comparison of DNS server software
DNS management software
Zone file
References
Further reading
External links
The official BIND site at Internet Systems Consortium (ISC.org)
The BIND Gitlab repo and issue tracker
History of BIND
BIND Release Strategy
Bundy Project
Create new BIND zonefile
Geo-IP Info graphic
DNS software
Free network-related software
Software using the ISC license |
8844 | https://en.wikipedia.org/wiki/Digital%20cinema | Digital cinema | Digital cinema refers to adoption of digital technology within the film industry to distribute or project motion pictures as opposed to the historical use of reels of motion picture film, such as 35 mm film. Whereas film reels have to be shipped to movie theaters, a digital movie can be distributed to cinemas in a number of ways: over the Internet or dedicated satellite links, or by sending hard drives or optical discs such as Blu-ray discs.
Digital movies are projected using a digital video projector instead of a film projector, are shot using digital movie cameras and edited using a non-linear editing system (NLE). The NLE is often a video editing application installed in one or more computers that may be networked to access the original footage from a remote server, share or gain access to computing resources for rendering the final video, and to allow several editors to work on the same timeline or project.
Alternatively a digital movie could be a film reel that has been digitized using a motion picture film scanner and then restored, or, a digital movie could be recorded using a film recorder onto film stock for projection using a traditional film projector.
Digital cinema is distinct from high-definition television and does not necessarily use traditional television or other traditional high-definition video standards, aspect ratios, or frame rates. In digital cinema, resolutions are represented by the horizontal pixel count, usually 2K (2048×1080 or 2.2 megapixels) or 4K (4096×2160 or 8.8 megapixels). The 2K and 4K resolutions used in digital cinema projection are often referred to as DCI 2K and DCI 4K. DCI stands for Digital Cinema Initiatives.
As digital-cinema technology improved in the early 2010s, most theaters across the world converted to digital video projection.
History
The transition from film to digital video was preceded by cinema's transition from analog to digital audio, with the release of the Dolby Digital (AC-3) audio coding standard in 1991. Its main basis is the modified discrete cosine transform (MDCT), a transform used in lossy audio compression. It is a modification of the discrete cosine transform (DCT) algorithm, which was first proposed by Nasir Ahmed in 1972 and was originally intended for image compression. The DCT was adapted into the MDCT by J.P. Princen, A.W. Johnson and Alan B. Bradley at the University of Surrey in 1987, and then Dolby Laboratories adapted the MDCT algorithm along with perceptual coding principles to develop the AC-3 audio format for cinema needs. Cinema in the 1990s typically combined analog video with digital audio.
Digital media playback of high-resolution 2K files has at least a 20-year history. Early video data storage units (RAIDs) fed custom frame buffer systems with large memories. In early digital video units, content was usually restricted to several minutes of material. Transfer of content between remote locations was slow and had limited capacity. It was not until the late 1990s that feature-length films could be sent over the "wire" (Internet or dedicated fiber links). On October 23, 1998, Digital Light Processing (DLP) projector technology was publicly demonstrated with the release of The Last Broadcast, the first feature-length movie shot, edited and distributed entirely digitally. In conjunction with Texas Instruments, the movie was publicly demonstrated in five theaters across the United States (Philadelphia, Portland (Oregon), Minneapolis, Providence, and Orlando).
Foundations
In the United States, on June 18, 1999, Texas Instruments' DLP Cinema projector technology was publicly demonstrated on two screens in Los Angeles and New York for the release of Lucasfilm's Star Wars Episode I: The Phantom Menace. In Europe, on February 2, 2000, Texas Instruments' DLP Cinema projector technology was publicly demonstrated, by Philippe Binant, on one screen in Paris for the release of Toy Story 2.
From 1997 to 2000, the JPEG 2000 image compression standard was developed by a Joint Photographic Experts Group (JPEG) committee chaired by Touradj Ebrahimi (later the JPEG president). In contrast to the original 1992 JPEG standard, which is a DCT-based lossy compression format for static digital images, JPEG 2000 is a discrete wavelet transform (DWT) based compression standard that could be adapted for motion imaging video compression with the Motion JPEG 2000 extension. JPEG 2000 technology was later selected as the video coding standard for digital cinema in 2004.
Initiatives
On January 19, 2000, the Society of Motion Picture and Television Engineers, in the United States, initiated the first standards group dedicated towards developing digital cinema. By December 2000, there were 15 digital cinema screens in the United States and Canada, 11 in Western Europe, 4 in Asia, and 1 in South America. Digital Cinema Initiatives (DCI) was formed in March 2002 as a joint project of many motion picture studios (Disney, Fox, MGM, Paramount, Sony Pictures, Universal and Warner Bros.) to develop a system specification for digital cinema.
In April 2004, in cooperation with the American Society of Cinematographers, DCI created standard evaluation material (the ASC/DCI StEM material) for testing of 2K and 4K playback and compression technologies. DCI selected JPEG 2000 as the basis for the compression in the system the same year. Initial tests with JPEG 2000 produced bit rates of around 75–125 Mbit/s for 2K resolution and 100–200 Mbit/s for 4K resolution.
Worldwide deployment
In China, in June 2005, an e-cinema system called "dMs" was established and was used in over 15,000 screens spread across China's 30 provinces. dMs estimated that the system would expand to 40,000 screens in 2009. In 2005, the UK Film Council's Digital Screen Network was launched in the UK by Arts Alliance Media, creating a chain of 250 2K digital cinema systems. The roll-out was completed in 2006. This was the first mass roll-out in Europe. AccessIT/Christie Digital also started a roll-out in the United States and Canada. By mid 2006, about 400 theaters were equipped with 2K digital projectors, with the number increasing every month. In August 2006, the Malayalam digital movie Moonnamathoral, produced by Benzy Martin, was distributed via satellite to cinemas, thus becoming the first Indian digital cinema. This was done by Emil and Eric Digital Films, a company based at Thrissur, using the end-to-end digital cinema system developed by Singapore-based DG2L Technologies.
In January 2007, Guru became the first Indian film mastered in the DCI-compliant JPEG 2000 Interop format and also the first Indian film to be previewed digitally, internationally, at the Elgin Winter Garden in Toronto. This film was digitally mastered at Real Image Media Technologies in India. In 2007, the UK became home to Europe's first DCI-compliant fully digital multiplex cinemas; Odeon Hatfield and Odeon Surrey Quays (in London), with a total of 18 digital screens, were launched on 9 February 2007. By March 2007, with the release of Disney's Meet the Robinsons, about 600 screens had been equipped with digital projectors. In June 2007, Arts Alliance Media announced the first European commercial digital cinema Virtual Print Fee (VPF) agreements (with 20th Century Fox and Universal Pictures). In March 2009 AMC Theatres announced that it closed a $315 million deal with Sony to replace all of its movie projectors with 4K digital projectors starting in the second quarter of 2009; it was anticipated that this replacement would be finished by 2012.
In January 2011, the total number of digital screens worldwide was 36,242, up from 16,339 at the end of 2009, a growth rate of 121.8 percent during the year. There were 10,083 digital screens in Europe as a whole (28.2 percent of the global figure), 16,522 in the United States and Canada (46.2 percent) and 7,703 in Asia (21.6 percent). Progress was slower in some territories, particularly Latin America and Africa. As of 31 March 2015, 38,719 screens (out of a total of 39,789 screens) in the United States had been converted to digital, 3,007 screens in Canada had been converted, and 93,147 screens internationally had been converted. At the end of 2017, virtually all of the world's cinema screens were digital (98%).
Although virtually all of the world's movie theaters have converted their screens to digital projection, some major motion pictures were still shot on film as of 2019. For example, Quentin Tarantino released Once Upon a Time in Hollywood in 70 mm and 35 mm in selected theaters across the United States and Canada.
Elements
In addition to the equipment already found in a film-based movie theatre (e.g., a sound reinforcement system, screen, etc.), a DCI-compliant digital cinema requires a digital projector and a powerful computer known as a server. Movies are supplied to the theatre as a digital file called a Digital Cinema Package (DCP). For a typical feature film, this file will be anywhere between 90 GB and 300 GB of data (roughly two to six times the information of a Blu-ray disc) and may arrive as a physical delivery on a conventional computer hard drive or via satellite or fibre-optic broadband Internet. As of 2013, physical deliveries of hard drives were most common in the industry. Promotional trailers arrive on a separate hard drive and range between 200 GB and 400 GB in size. The contents of the hard drive(s) may be encrypted.
Regardless of how the DCP arrives, it first needs to be copied onto the internal hard drives of the server, usually via a USB port, a process known as "ingesting". DCPs can be, and in the case of feature films almost always are, encrypted, to prevent illegal copying and piracy. The necessary decryption keys are supplied separately, usually as email attachments and then "ingested" via USB. Keys are time-limited and will expire after the end of the period for which the title has been booked. They are also locked to the hardware (server and projector) that is to screen the film, so if the theatre wishes to move the title to another screen or extend the run, a new key must be obtained from the distributor. Several versions of the same feature can be sent together. The original version (OV) is used as the basis of all the other playback options. Version files (VF) may have a different sound format (e.g. 7.1 as opposed to 5.1 surround sound) or subtitles. 2D and 3D versions are often distributed on the same hard drive.
The playback of the content is controlled by the server using a "playlist". As the name implies, this is a list of all the content that is to be played as part of the performance. The playlist will be created by a member of the theatre's staff using proprietary software that runs on the server. In addition to listing the content to be played the playlist also includes automation cues that allow the playlist to control the projector, the sound system, auditorium lighting, tab curtains and screen masking (if present), etc. The playlist can be started manually, by clicking the "play" button on the server's monitor screen, or automatically at pre-set times.
Technology and standards
Digital Cinema Initiatives
Digital Cinema Initiatives (DCI), a joint venture of the six major studios, published the first version (V1.0) of a system specification for digital cinema in July 2005. The main declared objectives of the specification were to define a digital cinema system that would "present a theatrical experience that is better than what one could achieve now with a traditional 35mm Answer Print", to provide global standards for interoperability such that any DCI-compliant content could play on any DCI-compliant hardware anywhere in the world and to provide robust protection for the intellectual property of the content providers.
The DCI specification calls for picture encoding using the ISO/IEC 15444-1 "JPEG2000" (.j2c) standard and use of the CIE XYZ color space at 12 bits per component encoded with a 2.6 gamma applied at projection. Two levels of resolution for both content and projectors are supported: 2K (2048×1080) or 2.2 MP at 24 or 48 frames per second, and 4K (4096×2160) or 8.85 MP at 24 frames per second. The specification ensures that 2K content can play on 4K projectors and vice versa. Smaller resolutions in one direction are also supported (the image gets automatically centered). Later versions of the standard added additional playback rates (like 25 fps in SMPTE mode). For the sound component of the content the specification provides for up to 16 channels of uncompressed audio using the "Broadcast Wave" (.wav) format at 24 bits and 48 kHz or 96 kHz sampling.
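As a rough illustration of the image encoding described above, the following Python sketch converts 12-bit X'Y'Z' code values to relative linear light using the 2.6 gamma, and back again. The simple normalisation over the 12-bit code range and the sample values are assumptions made for illustration; the full DCI specification defines additional constants and conditions not modelled here.

# Illustrative sketch: decode/encode 12-bit X'Y'Z' code values with a 2.6 gamma.
# Assumption: code values are simply normalized over the 12-bit range (0..4095).

GAMMA = 2.6
MAX_CODE = 2 ** 12 - 1  # 4095 for 12-bit components

def decode(code_value: int) -> float:
    """12-bit code value -> relative linear light in [0, 1]."""
    return (code_value / MAX_CODE) ** GAMMA

def encode(linear: float) -> int:
    """Relative linear light in [0, 1] -> 12-bit code value."""
    return round(MAX_CODE * linear ** (1 / GAMMA))

if __name__ == "__main__":
    for cv in (0, 1024, 2048, 3072, 4095):
        lin = decode(cv)
        print(f"code {cv:4d} -> linear {lin:.6f} -> code {encode(lin)}")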
Playback is controlled by an XML-format Composition Playlist; the picture and sound track files themselves are wrapped into MXF-compliant files, with picture data limited to a maximum rate of 250 Mbit/s. Details about encryption, key management and logging are all discussed in the specification, as are the minimum specifications for the projectors employed, including the color gamut, the contrast ratio and the brightness of the image. While much of the specification codifies work that had already been ongoing in the Society of Motion Picture and Television Engineers (SMPTE), it is important in establishing a content-owner framework for the distribution and security of first-release motion-picture content.
National Association of Theatre Owners
In addition to DCI's work, the National Association of Theatre Owners (NATO) released its Digital Cinema System Requirements. The document addresses the requirements of digital cinema systems from the operational needs of the exhibitor, focusing on areas not addressed by DCI, including access for the visually impaired and hearing impaired, workflow inside the cinema, and equipment interoperability. In particular, NATO's document details requirements for the Theatre Management System (TMS), the governing software for digital cinema systems within a theatre complex, and provides direction for the development of security key management systems. As with DCI's document, NATO's document is also important to the SMPTE standards effort.
E-Cinema
The Society of Motion Picture and Television Engineers (SMPTE) began work on standards for digital cinema in 2000. It was clear by that point in time that HDTV did not provide a sufficient technological basis for the foundation of digital cinema playback. In Europe, India and Japan however, there is still a significant presence of HDTV for theatrical presentations. Agreements within the ISO standards body have led to these non-compliant systems being referred to as Electronic Cinema Systems (E-Cinema).
Projectors for digital cinema
Only three manufacturers currently make DCI-approved digital cinema projectors: Barco, Christie and NEC, all of which use the Digital Light Processing (DLP) technology developed by Texas Instruments (TI). Sony formerly made DCI-approved projectors based on its own SXRD technology but has since left the market (see below). D-Cinema projectors are similar in principle to digital projectors used in industry, education, and domestic home cinemas, but differ in two important respects. First, projectors must conform to the strict performance requirements of the DCI specification. Second, projectors must incorporate anti-piracy devices intended to enforce copyright compliance, such as licensing limits. For these reasons all projectors intended to be sold to theaters for screening current release movies must be approved by the DCI before being put on sale; they now pass through a process called the CTP (compliance test plan). Because feature films in digital form are encrypted and the decryption keys (KDMs) are locked to the serial number of the server used (linking to both the projector serial number and the server is planned in the future), a system will allow playback of a protected feature only with the required KDM.
DLP Cinema
Three manufacturers have licensed the DLP Cinema technology developed by Texas Instruments (TI): Christie Digital Systems, Barco, and NEC. While NEC is a relative newcomer to Digital Cinema, Christie is the main player in the U.S. and Barco takes the lead in Europe and Asia. Initially DCI-compliant DLP projectors were available in 2K only, but from early 2012, when TI's 4K DLP chip went into full production, DLP projectors have been available in both 2K and 4K versions. Manufacturers of DLP-based cinema projectors can now also offer 4K upgrades to some of the more recent 2K models. Early DLP Cinema projectors, which were deployed primarily in the United States, used limited 1280×1024 resolution or the equivalent of 1.3 MP (megapixels). Digital Projection Incorporated (DPI) designed and sold a few DLP Cinema units (is8-2K) when TI's 2K technology debuted but then abandoned the D-Cinema market while continuing to offer DLP-based projectors for non-cinema purposes. Although based on the same 2K TI "light engine" as those of the major players, DPI's units are so rare as to be virtually unknown in the industry. The early, lower-resolution projectors are still widely used for pre-show advertising but not usually for feature presentations.
TI's technology is based on the use of digital micromirror devices (DMDs). These are MEMS devices that are manufactured from silicon using similar technology to that of computer chips. The surface of these devices is covered by a very large number of microscopic mirrors, one for each pixel, so a 2K device has about 2.2 million mirrors and a 4K device about 8.8 million. Each mirror vibrates several thousand times a second between two positions: In one, light from the projector's lamp is reflected towards the screen, in the other away from it. The proportion of the time the mirror is in each position varies according to the required brightness of each pixel. Three DMD devices are used, one for each of the primary colors. Light from the lamp, usually a Xenon arc lamp similar to those used in film projectors with a power between 1 kW and 7 kW, is split by colored filters into red, green and blue beams which are directed at the appropriate DMD. The 'forward' reflected beam from the three DMDs is then re-combined and focused by the lens onto the cinema screen.
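A toy calculation makes the mirror counts and the time-division idea concrete. It is not TI's actual drive scheme; the linear duty-cycle model is purely illustrative.

# Toy illustration of the ideas in the paragraph above; not TI's actual drive scheme.
RESOLUTIONS = {"2K": (2048, 1080), "4K": (4096, 2160)}
for name, (w, h) in RESOLUTIONS.items():
    print(f"{name}: {w * h:,} mirrors per DMD (about {w * h / 1e6:.1f} million)")

# Time-division model: the fraction of each frame a mirror spends reflecting light
# toward the screen sets the pixel's relative brightness (assumed linear here).
FRAME_TIME_MS = 1000 / 24          # one frame at 24 frames per second

def on_time_ms(relative_brightness: float) -> float:
    return FRAME_TIME_MS * max(0.0, min(1.0, relative_brightness))

print(f"half brightness -> mirror 'on' for {on_time_ms(0.5):.1f} ms of each {FRAME_TIME_MS:.1f} ms frame")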
Sony SXRD
Alone amongst the manufacturers of DCI-compliant cinema projectors, Sony decided to develop its own technology rather than use TI's DLP technology. SXRD (Silicon X-tal (Crystal) Reflective Display) projectors have only ever been manufactured in 4K form and, until the launch of the 4K DLP chip by TI, Sony SXRD projectors were the only 4K DCI-compatible projectors on the market. Unlike DLP projectors, however, SXRD projectors do not present the left- and right-eye images of stereoscopic movies sequentially; instead they use half the available area of the SXRD chip for each eye's image. Thus during stereoscopic presentations the SXRD projector functions as a sub-2K projector; the same is true for HFR 3D content.
In late April 2020, however, Sony announced that it would no longer manufacture digital cinema projectors.
Stereo 3D images
In late 2005, interest in digital 3-D stereoscopic projection led to a new willingness on the part of theaters to co-operate in installing 2K stereo systems to show Disney's Chicken Little in 3-D. Six more digital 3-D movies were released in 2006 and 2007 (including Beowulf, Monster House and Meet the Robinsons). The technology combines a single digital projector fitted with either a polarizing filter (for use with polarized glasses and silver screens), a filter wheel or an emitter for LCD glasses. RealD uses a "ZScreen" for polarisation and MasterImage uses a filter wheel that changes the polarity of the projector's light output several times per second to alternate quickly between the left- and right-eye views. Another system that uses a filter wheel is Dolby 3D. The wheel changes the wavelengths of the colours being displayed, and tinted glasses filter these changes so the incorrect wavelength cannot enter the wrong eye. XpanD makes use of an external emitter that sends a signal to the 3D glasses to block out the wrong image from the wrong eye.
Laser
RGB laser projection produces the purest BT.2020 colors and the brightest images.
LED screen for digital cinema
In Asia, on July 13, 2017, an LED screen for digital cinema developed by Samsung Electronics was publicly demonstrated on one screen at Lotte Cinema World Tower in Seoul. The first installation in Europe is in the Arena Sihlcity Cinema in Zürich. These displays do not use a projector; instead they use a MicroLED video wall, and can offer higher contrast ratios, higher resolutions, and overall improvements in image quality. MicroLED allows for the elimination of display bezels, creating the illusion of a single large screen; this is possible due to the large amount of spacing between pixels in MicroLED displays. Sony already sells MicroLED displays as a replacement for conventional cinema screens.
Effect on distribution
Digital distribution of movies has the potential to save money for film distributors. To print an 80-minute feature film can cost US$1,500 to $2,500, so making thousands of prints for a wide-release movie can cost millions of dollars. In contrast, at the maximum 250 megabit-per-second data rate (as defined by DCI for digital cinema), a feature-length movie can be stored on an off-the-shelf 300 GB hard drive for $50 and a broad release of 4000 'digital prints' might cost $200,000. In addition hard drives can be returned to distributors for reuse. With several hundred movies distributed every year, the industry saves billions of dollars. The digital-cinema roll-out was stalled by the slow pace at which exhibitors acquired digital projectors, since the savings would be seen not by themselves but by distribution companies. The Virtual Print Fee model was created to address this by passing some of the saving on to the cinemas. As a consequence of the rapid conversion to digital projection, the number of theatrical releases exhibited on film is dwindling. As of 4 May 2014, 37,711 screens (out of a total of 40,048 screens) in the United States have been converted to digital, 3,013 screens in Canada have been converted, and 79,043 screens internationally have been converted.
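The savings claimed above can be sanity-checked with back-of-the-envelope arithmetic. The sketch below uses only the figures quoted in this section (print costs of $1,500 to $2,500, a $50 hard drive, a 4,000-copy release and the 250 Mbit/s ceiling), plus an assumed two-hour running time.

# Back-of-the-envelope check of the figures quoted above (illustrative only).

PRINT_COST_RANGE = (1_500, 2_500)   # USD per 35 mm print (figure from the text)
DRIVE_COST = 50                     # USD per reusable hard drive (figure from the text)
COPIES = 4_000                      # a wide release

film_cost = tuple(c * COPIES for c in PRINT_COST_RANGE)
digital_cost = DRIVE_COST * COPIES
print(f"Film prints:    ${film_cost[0]:,} - ${film_cost[1]:,}")
print(f"Digital drives: ${digital_cost:,}")

# Does a feature at the 250 Mbit/s DCI maximum fit on a 300 GB drive?
# Assumption: a two-hour running time.
running_time_s = 2 * 60 * 60
max_bits = 250e6 * running_time_s
print(f"Two hours at 250 Mbit/s: {max_bits / 8 / 1e9:.0f} GB (fits on a 300 GB drive)")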
Telecommunication
On October 29, 2001, Bernard Pauchon, Alain Lorentz, Raymond Melwig and Philippe Binant realized and demonstrated the first digital cinema transmission by satellite in Europe of a feature film.
Live broadcasting to cinemas
Digital cinemas can deliver live broadcasts from performances or events. This began with the New York Metropolitan Opera delivering regular live broadcasts into cinemas, and has been widely imitated ever since. The leading territories providing the content are the UK, the US, France and Germany. The Royal Opera House, Sydney Opera House, English National Opera and others have found new and returning audiences captivated by the detail offered by a live digital broadcast, with handheld and crane-mounted cameras positioned throughout the venue to capture the emotion that might be missed in a live venue situation. In addition, these providers all offer extra value during the intervals, e.g. interviews with choreographers and cast members or a backstage tour, which would not be on offer at the live event itself. Other live events in this field include live theatre from NT Live, Branagh Live, the Royal Shakespeare Company, Shakespeare's Globe, the Royal Ballet, the Mariinsky Ballet, the Bolshoi Ballet and the Berlin Philharmoniker.
In the last ten years this initial offering of the arts has expanded to include live and recorded music events such as Take That Live, One Direction Live and Andre Rieu, and live musicals such as Miss Saigon and a record-breaking Billy Elliot Live In Cinemas. Live sport, documentaries with a live question-and-answer element such as the Oasis documentary, lectures, faith broadcasts, stand-up comedy, museum and gallery exhibitions, and TV specials such as the record-breaking Doctor Who fiftieth-anniversary special The Day of the Doctor have all contributed to a valuable revenue stream for cinemas large and small all over the world. This form of live broadcasting, formerly known as Alternative Content, has become known as Event Cinema, and a trade association now exists to represent it. Ten years on, the sector has become a sizeable revenue stream in its own right, earning a loyal following amongst fans of the arts, with the content seemingly limited only by the imagination of the producers. Theatre, ballet, sport, exhibitions, TV specials and documentaries are now established forms of Event Cinema. Worldwide estimates put the likely value of the Event Cinema industry at $1bn by 2019.
Event Cinema currently accounts for, on average, between 1% and 3% of overall box office for cinemas worldwide, but anecdotally some cinemas have reported attributing as much as 25%, 48% and even 51% (the Rio Bio cinema in Stockholm) of their overall box office to it. It is ultimately envisaged that Event Cinema will account for around 5% of overall box office globally. Event Cinema saw six worldwide records set and broken from 2013 to 2015, with notable successes including Doctor Who ($10.2m in three days at the box office, for an event that was also broadcast on terrestrial TV simultaneously), Pompeii Live by the British Museum, Billy Elliot, Andre Rieu, One Direction and Richard III by the Royal Shakespeare Company.
Event Cinema is defined more by the frequency of events than by the content itself. Event Cinema events typically appear in cinemas during traditionally quieter times in the cinema week, such as the Monday to Thursday daytime/evening slot, and are characterised by the One Night Only release, followed by one or more 'Encore' releases a few days or weeks later if the event is successful and sold out. On occasion, more successful events have returned to cinemas some months or even years later, as in the case of NT Live, where audience loyalty and company branding are so strong that the content owner can be assured of a good showing at the box office.
Pros and cons
Pros
An advantage of the digital creation of sets and locations, especially in a time of growing film series and sequels, is that virtual sets, once computer generated and stored, can easily be revived for future films.
Because digital film images are stored as data files on hard disk or flash memory, different versions of an edit can be produced by altering a few settings on the editing console, with the cut assembled virtually in the computer's memory. A broad choice of effects can be sampled simply and rapidly, without the physical constraints posed by traditional cut-and-splice editing. Digital cinema allows national cinemas to construct films specific to their cultures in ways that the more constricting configurations and economics of customary film-making prevented. Low-cost cameras and computer-based editing software have gradually enabled films to be produced for minimal cost. The ability of digital cameras to let film-makers shoot limitless footage without wasting pricey celluloid has transformed film production in some Third World countries. From the consumer's perspective, digital prints do not deteriorate with the number of showings. Unlike celluloid film, there is no projection mechanism or manual handling to add scratches or other physically generated artefacts, so provincial cinemas that would once have received old prints can give consumers the same cinematographic experience (all other things being equal) as those attending the premiere.
The use of non-linear editing systems (NLEs) in movies allows edits and cuts to be made non-destructively, without actually discarding any footage.
Cons
A number of high-profile film directors, including Christopher Nolan, Paul Thomas Anderson, David O. Russell and Quentin Tarantino, have publicly criticized digital cinema and advocated the use of film and film prints. Most famously, Tarantino has suggested he may retire because, although he can still shoot on film, the rapid conversion to digital means he cannot project from 35 mm prints in the majority of American cinemas. Steven Spielberg has stated that although digital projection produces a much better image than film if originally shot in digital, it is "inferior" when the material has been converted to digital. He attempted at one stage to release Indiana Jones and the Kingdom of the Crystal Skull solely on film. Paul Thomas Anderson was able to create 70 mm film prints for his film The Master.
Film critic Roger Ebert criticized the use of DCPs after a cancelled film-festival screening of Brian De Palma's film Passion at the New York Film Festival, the result of a lockup caused by the coding system.
The theoretical resolution of 35 mm film is greater than that of 2K digital cinema. 2K resolution (2048×1080) is also only slightly greater than that of consumer-based 1080p HD (1920×1080). However, since digital post-production techniques became the standard in the early 2000s, the majority of movies, whether photographed digitally or on 35 mm film, have been mastered and edited at the 2K resolution. Moreover, 4K post-production was becoming more common as of 2013. As projectors are replaced with 4K models, the difference in resolution between digital and 35 mm film is somewhat reduced. Digital cinema servers utilize far greater bandwidth than domestic "HD" formats, allowing for a difference in quality (e.g., Blu-ray colour encoding is 4:2:0 at a maximum data rate of 48 Mbit/s, while DCI D-Cinema is 4:4:4 at 250 Mbit/s for 2D/3D and 500 Mbit/s for HFR 3D), so each frame carries greater detail.
Owing to the smaller dynamic range of digital cameras, correcting poor digital exposures is more difficult than correcting poor film exposures during post-production. A partial solution to this problem is to add complex video-assist technology during the shooting process. However, such technologies are typically available only to high-budget production companies. Digital cinema's efficiency at storing images also has a downside. The speed and ease of modern digital editing processes threaten to give editors and their directors, if not an embarrassment of choice then at least a confusion of options, potentially making the editing process, with its 'try it and see' philosophy, lengthier rather than shorter. Because the equipment needed to produce digital feature films can be obtained more easily than celluloid, producers could inundate the market with cheap productions and potentially dominate the efforts of serious directors. Because of the speed at which they are filmed, such productions sometimes lack essential narrative structure.
The projectors used for celluloid film were largely the same technology as when film/movies were invented over 100 years ago. The evolutions of adding sound and widescreen could largely be accommodated by bolting on sound decoders and changing lenses. This well-proven and well-understood technology had several advantages: (1) a mechanical projector life of around 35 years; (2) a mean time between failures (MTBF) of 15 years; and (3) an average repair time of 15 minutes (often done by the projectionist). On the other hand, digital projectors are around 10 times more expensive and have a much shorter life expectancy due to the developing technology (which has already moved from 2K to 4K), so the pace of obsolescence is higher. The MTBF has not yet been established, and the ability of the projectionist to effect a quick repair is gone.
Costs
Pros
The electronic transfer of digital films, from central servers to servers in cinema projection booths, is an inexpensive way of supplying copies of the newest releases to the vast number of cinema screens demanded by prevailing saturation-release strategies. There is a significant saving on print expenses in such cases: at a minimum cost per print of $1,200–2,000, the cost of celluloid print production is between $5 million and $8 million per film. With several thousand releases a year, the probable savings offered by digital distribution and projection are over $1 billion. The cost savings and ease of distribution, together with the ability to store a film rather than having to send a print on to the next cinema, allow a larger range of films to be screened and watched by the public, including minority and small-budget films that would not otherwise get such a chance.
Cons
The initial costs for converting theaters to digital are high: $100,000 per screen, on average. Theaters have been reluctant to switch without a cost-sharing arrangement with film distributors. A solution is a temporary Virtual Print Fee system, where the distributor (who saves the money of producing and transporting a film print) pays a fee per copy to help finance the digital systems of the theaters. A theater can purchase a film projector for as little as $10,000 (though projectors intended for commercial cinemas cost two to three times that; to which must be added the cost of a long-play system, which also costs around $10,000, making a total of around $30,000–$40,000) from which they could expect an average life of 30–40 years. By contrast, a digital cinema playback system—including server, media block, and projector—can cost two to three times as much, and would have a greater risk of component failure and obsolescence. (In Britain the cost of an entry level projector including server, installation, etc., would be £31,000 [$50,000].)
Archiving digital masters has also turned out to be both tricky and costly. In a 2007 study, the Academy of Motion Picture Arts and Sciences found the cost of long-term storage of 4K digital masters to be "enormously higher—up to 11 times that of the cost of storing film masters." This is because of the limited or uncertain lifespan of digital storage: No current digital medium—be it optical disc, magnetic hard drive or digital tape—can reliably store a motion picture for as long as a hundred years or more (something that film—properly stored and handled—does very well). The short history of digital storage media has been one of innovation and, therefore, of obsolescence. Archived digital content must be periodically removed from obsolete physical media to up-to-date media. The expense of digital image capture is not necessarily less than the capture of images onto film; indeed, it is sometimes greater.
See also
JPEG 2000
3D film
4K resolution
Digital cinematography
Digital projector
Digital intermediate
Digital Cinema Initiatives
Display resolution
Digital 3D
Color suite
List of film-related topics (extensive alphabetical listing)
References
Bibliography
Charles S. Swartz (editor), Understanding Digital Cinema: A Professional Handbook, Elsevier / Focal Press, Burlington, Oxford, 2005, xvi + 327 p.
Philippe Binant (propos recueillis par Dominique Maillet), « Kodak. Au cœur de la projection numérique », Actions, n° 29, Division Cinéma et Télévision Kodak, Paris, 2007, p. 12–13.
Filmography
Christopher Kenneally, Side by Side, 2012. IMDb
External links
Side by Side : Q & A with Keanu Reeves, Le Royal Monceau, Paris, April 11–12, 2016.
Film and video technology
Digital media
Cinematography
Filmmaking |
9256 | https://en.wikipedia.org/wiki/Enigma%20machine | Enigma machine | The Enigma machine is a cipher device developed and used in the early- to mid-20th century to protect commercial, diplomatic, and military communication. It was employed extensively by Nazi Germany during World War II, in all branches of the German military. The Enigma machine was considered so secure that it was used to encipher the most top-secret messages.
The Enigma has an electromechanical rotor mechanism that scrambles the 26 letters of the alphabet. In typical use, one person enters text on the Enigma's keyboard and another person writes down which of 26 lights above the keyboard illuminated at each key press. If plain text is entered, the illuminated letters are the encoded ciphertext. Entering ciphertext transforms it back into readable plaintext. The rotor mechanism changes the electrical connections between the keys and the lights with each keypress.
The security of the system depends on machine settings that were generally changed daily, based on secret key lists distributed in advance, and on other settings that were changed for each message. The receiving station would have to know and use the exact settings employed by the transmitting station to successfully decrypt a message.
While Nazi Germany introduced a series of improvements to Enigma over the years, and these hampered decryption efforts, they did not prevent Poland from cracking the machine prior to the war, enabling the Allies to exploit Enigma-enciphered messages as a major source of intelligence. Many commentators say the flow of Ultra communications intelligence from the decryption of Enigma, Lorenz, and other ciphers, shortened the war substantially, and might even have altered its outcome.
History
The Enigma machine was invented by German engineer Arthur Scherbius at the end of World War I. This was unknown until 2003 when a paper by Karl de Leeuw was found that described in detail Scherbius' changes. The German firm Scherbius & Ritter, co-founded by Scherbius, patented ideas for a cipher machine in 1918 and began marketing the finished product under the brand name Enigma in 1923, initially targeted at commercial markets. The name is said to be from the Enigma Variations of English composer Edward Elgar. Early models were used commercially from the early 1920s, and adopted by military and government services of several countries, most notably Nazi Germany before and during World War II.
Several different Enigma models were produced, but the German military models, having a plugboard, were the most complex. Japanese and Italian models were also in use. With its adoption (in slightly modified form) by the German Navy in 1926 and the German Army and Air Force soon after, the name Enigma became widely known in military circles. Pre-war German military planning emphasized fast, mobile forces and tactics, later known as blitzkrieg, which depend on radio communication for command and coordination. Since adversaries would likely intercept radio signals, messages had to be protected with secure encipherment. Compact and easily portable, the Enigma machine filled that need.
Breaking Enigma
Around December 1932 Marian Rejewski, a Polish mathematician and cryptologist at the Polish Cipher Bureau, used the theory of permutations, and flaws in the German military-message encipherment procedures, to break message keys of the plugboard Enigma machine. France's spy Hans-Thilo Schmidt obtained access to German cipher materials that included the daily keys used in September and October 1932. Those keys included the plugboard settings. The French passed the material to the Poles, and Rejewski used some of that material and the message traffic in September and October to solve for the unknown rotor wiring. Consequently the Polish mathematicians were able to build their own Enigma machines, dubbed "Enigma doubles". Rejewski was aided by fellow mathematician-cryptologists Jerzy Różycki and Henryk Zygalski, both of whom had been recruited with Rejewski from Poznań University, which institution had been selected for its students' knowledge of the German language, that area having been held by Germany prior to World War I. The Polish Cipher Bureau developed techniques to defeat the plugboard and find all components of the daily key, which enabled the Cipher Bureau to read German Enigma messages starting from January 1933.
Over time the German cryptographic procedures improved, and the Cipher Bureau developed techniques and designed mechanical devices to continue reading Enigma traffic. As part of that effort, the Poles exploited quirks of the rotors, compiled catalogues, built a cyclometer (invented by Rejewski) to help make a catalogue with 100,000 entries, invented and produced Zygalski sheets, and built the electromechanical cryptologic bomba (invented by Rejewski) to search for rotor settings. In 1938 the Poles had six bomby (plural of bomba), but when that year the Germans added two more rotors, ten times as many bomby would have been needed to read the traffic.
On 26 and 27 July 1939, in Pyry, just south of Warsaw, the Poles initiated French and British military intelligence representatives into the Polish Enigma-decryption techniques and equipment, including Zygalski sheets and the cryptologic bomb, and promised each delegation a Polish-reconstructed Enigma (the devices were soon delivered).
In September 1939, British Military Mission 4, which included Colin Gubbins and Vera Atkins, went to Poland, intending to evacuate cipher-breakers Marian Rejewski, Jerzy Różycki, and Henryk Zygalski from the country. The cryptologists, however, had been evacuated by their own superiors into Romania, at the time a Polish-allied country. On the way, for security reasons, the Polish Cipher Bureau personnel had deliberately destroyed their records and equipment. From Romania they traveled on to France, where they resumed their cryptological work, collaborating by teletype with the British, who began work on decrypting German Enigma messages, using the Polish equipment and techniques.
Gordon Welchman, who became head of Hut 6 at Bletchley Park, has written: "Hut 6 Ultra would never have gotten off the ground if we had not learned from the Poles, in the nick of time, the details both of the German military version of the commercial Enigma machine, and of the operating procedures that were in use." The Polish transfer of theory and technology at Pyry formed the crucial basis for the subsequent World War II British Enigma-decryption effort at Bletchley Park, where Welchman worked.
During the war, British cryptologists decrypted a vast number of messages enciphered on Enigma. The intelligence gleaned from this source, codenamed "Ultra" by the British, was a substantial aid to the Allied war effort.
Though Enigma had some cryptographic weaknesses, in practice it was German procedural flaws, operator mistakes, failure to systematically introduce changes in encipherment procedures, and Allied capture of key tables and hardware that, during the war, enabled Allied cryptologists to succeed.
Design
Like other rotor machines, the Enigma machine is a combination of mechanical and electrical subsystems. The mechanical subsystem consists of a keyboard; a set of rotating disks called rotors arranged adjacently along a spindle; one of various stepping components to turn at least one rotor with each key press; and a series of lamps, one for each letter. These design features are the reason the Enigma machine was originally referred to as a rotor-based cipher machine during its conception in 1915.
Electrical pathway
An electrical pathway is a route for current to travel. By changing this pathway with each key press, the Enigma machine scrambled messages. The mechanical parts act by forming a varying electrical circuit. When a key is pressed, one or more rotors rotate on the spindle. On the sides of the rotors are a series of electrical contacts that, after rotation, line up with contacts on the other rotors or fixed wiring on either end of the spindle. When the rotors are properly aligned, each key on the keyboard is connected to a unique electrical pathway through the series of contacts and internal wiring. Current, typically from a battery, flows through the pressed key, into the newly configured set of circuits and back out again, ultimately lighting one display lamp, which shows the output letter. For example, when encrypting a message starting ANX..., the operator would first press the A key, and the Z lamp might light, so Z would be the first letter of the ciphertext. The operator would next press N, and then X in the same fashion, and so on.
Current flows from the battery (1) through a depressed bi-directional keyboard switch (2) to the plugboard (3). Next, it passes through the (unused in this instance, so shown closed) plug "A" (3) via the entry wheel (4), through the wiring of the three (Wehrmacht Enigma) or four (Kriegsmarine M4 and Abwehr variants) installed rotors (5), and enters the reflector (6). The reflector returns the current, via an entirely different path, back through the rotors (5) and entry wheel (4), proceeding through plug "S" (7) connected with a cable (8) to plug "D", and another bi-directional switch (9) to light the appropriate lamp.
The repeated changes of electrical path through an Enigma scrambler implement a polyalphabetic substitution cipher that provides Enigma's security. The diagram on the right shows how the electrical pathway changes with each key depression, which causes rotation of at least the right-hand rotor. Current passes into the set of rotors, into and back out of the reflector, and out through the rotors again. The greyed-out lines are other possible paths within each rotor; these are hard-wired from one side of each rotor to the other. The letter A encrypts differently with consecutive key presses, first to G, and then to C. This is because the right-hand rotor steps (rotates one position) on each key press, sending the signal on a completely different route. Eventually other rotors step with a key press.
Rotors
The rotors (alternatively wheels or drums, Walzen in German) form the heart of an Enigma machine. Each rotor is a disc approximately 10 cm (3.9 in) in diameter made from Ebonite or Bakelite with 26 brass, spring-loaded, electrical contact pins arranged in a circle on one face, with the other face housing 26 corresponding electrical contacts in the form of circular plates. The pins and contacts represent the alphabet — typically the 26 letters A–Z, as will be assumed for the rest of this description. When the rotors are mounted side by side on the spindle, the pins of one rotor rest against the plate contacts of the neighbouring rotor, forming an electrical connection. Inside the body of the rotor, 26 wires connect each pin on one side to a contact on the other in a complex pattern. Most of the rotors are identified by Roman numerals, and each issued copy of rotor I, for instance, is wired identically to all others. The same is true for the special thin beta and gamma rotors used in the M4 naval variant.
By itself, a rotor performs only a very simple type of encryption, a simple substitution cipher. For example, the pin corresponding to the letter E might be wired to the contact for letter T on the opposite face, and so on. Enigma's security comes from using several rotors in series (usually three or four) and the regular stepping movement of the rotors, thus implementing a polyalphabetic substitution cipher.
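In code, a single rotor at a given position is just a shifted lookup into a fixed permutation, applied in one direction on the way in and inverted on the way back. The sketch below (Python) uses the commonly documented wiring of rotor I purely as an illustration and ignores ring settings.

# A rotor as a substitution cipher, shifted by its rotational position.
# Wiring string: input contact A maps to E, B to K, and so on (rotor I, as commonly documented).
import string

ALPHABET = string.ascii_uppercase
ROTOR_I = "EKMFLGDQVZNTOWYHXUSPAIBRCJ"

def rotor_forward(letter: str, position: int, wiring: str = ROTOR_I) -> str:
    """Signal entering the rotor from the right, with the rotor turned `position` steps."""
    entry = (ALPHABET.index(letter) + position) % 26
    exit_ = (ALPHABET.index(wiring[entry]) - position) % 26
    return ALPHABET[exit_]

def rotor_backward(letter: str, position: int, wiring: str = ROTOR_I) -> str:
    """Signal travelling back through the same rotor (the inverse permutation)."""
    entry = (ALPHABET.index(letter) + position) % 26
    exit_ = (wiring.index(ALPHABET[entry]) - position) % 26
    return ALPHABET[exit_]

# At position 0 the rotor reproduces its wiring table: A -> E, B -> K, ...
assert rotor_forward("A", 0) == "E"
# Forward followed by backward recovers the original letter at any position.
assert all(rotor_backward(rotor_forward(c, 7), 7) == c for c in ALPHABET)
print("rotor I at position 0:", "".join(rotor_forward(c, 0) for c in ALPHABET))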
Each rotor can be set to one of 26 possible starting positions when placed in an Enigma machine. After insertion, a rotor can be turned to the correct position by hand, using the grooved finger-wheel which protrudes from the internal Enigma cover when closed. In order for the operator to know the rotor's position, each has an alphabet tyre (or letter ring) attached to the outside of the rotor disc, with 26 characters (typically letters); one of these is visible through the window for that slot in the cover, thus indicating the rotational position of the rotor. In early models, the alphabet ring was fixed to the rotor disc. A later improvement was the ability to adjust the alphabet ring relative to the rotor disc. The position of the ring was known as the Ringstellung ("ring setting"), and that setting was a part of the initial setup needed prior to an operating session. In modern terms it was a part of the initialization vector.
Each rotor contains one or more notches that control rotor stepping. In the military variants, the notches are located on the alphabet ring.
The Army and Air Force Enigmas were used with several rotors, initially three. On 15 December 1938, this changed to five, from which three were chosen for a given session. Rotors were marked with Roman numerals to distinguish them: I, II, III, IV and V, all with single notches located at different points on the alphabet ring. This variation was probably intended as a security measure, but ultimately allowed the Polish Clock Method and British Banburismus attacks.
The Naval version of the Wehrmacht Enigma had always been issued with more rotors than the other services: At first six, then seven, and finally eight. The additional rotors were marked VI, VII and VIII, all with different wiring, and had two notches, resulting in more frequent turnover. The four-rotor Naval Enigma (M4) machine accommodated an extra rotor in the same space as the three-rotor version. This was accomplished by replacing the original reflector with a thinner one and by adding a thin fourth rotor. That fourth rotor was one of two types, Beta or Gamma, and never stepped, but could be manually set to any of 26 positions. One of the 26 made the machine perform identically to the three-rotor machine.
Stepping
To avoid merely implementing a simple (solvable) substitution cipher, every key press caused one or more rotors to step by one twenty-sixth of a full rotation, before the electrical connections were made. This changed the substitution alphabet used for encryption, ensuring that the cryptographic substitution was different at each new rotor position, producing a more formidable polyalphabetic substitution cipher. The stepping mechanism varied slightly from model to model. The right-hand rotor stepped once with each keystroke, and other rotors stepped less frequently.
Turnover
The advancement of a rotor other than the left-hand one was called a turnover by the British. This was achieved by a ratchet and pawl mechanism. Each rotor had a ratchet with 26 teeth and every time a key was pressed, the set of spring-loaded pawls moved forward in unison, trying to engage with a ratchet. The alphabet ring of the rotor to the right normally prevented this. As this ring rotated with its rotor, a notch machined into it would eventually align itself with the pawl, allowing it to engage with the ratchet, and advance the rotor on its left. The right-hand pawl, having no rotor and ring to its right, stepped its rotor with every key depression. For a single-notch rotor in the right-hand position, the middle rotor stepped once for every 26 steps of the right-hand rotor. Similarly for rotors two and three. For a two-notch rotor, the rotor to its left would turn over twice for each rotation.
The first five rotors to be introduced (I–V) contained one notch each, while the additional naval rotors VI, VII and VIII each had two notches. The position of the notch on each rotor was determined by the letter ring, which could be adjusted in relation to the core containing the interconnections. Rotors I to V caused the next wheel to move as they stepped from Q to R, from E to F, from V to W, from J to K, and from Z to A respectively, while the two-notch naval rotors VI to VIII did so at both Z to A and M to N.
The design also included a feature known as double-stepping. This occurred when each pawl aligned with both the ratchet of its rotor and the rotating notched ring of the neighbouring rotor. If a pawl engaged with a ratchet through alignment with a notch, as it moved forward it pushed against both the ratchet and the notch, advancing both rotors. In a three-rotor machine, double-stepping affected rotor two only. If, in moving forward, the ratchet of rotor three was engaged, rotor two would move again on the subsequent keystroke, resulting in two consecutive steps. Rotor two also pushes rotor one forward after 26 steps, but since rotor one moves forward with every keystroke anyway, there is no double-stepping. This double-stepping caused the rotors to deviate from odometer-style regular motion.
With three wheels and only single notches in the first and second wheels, the machine had a period of 26×25×26 = 16,900 (not 26×26×26, because of double-stepping). Historically, messages were limited to a few hundred letters, and so there was no chance of repeating any combined rotor position during a single session, denying cryptanalysts valuable clues.
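The 16,900-step figure can be reproduced with a short simulation of the pawl-and-ratchet behaviour described above, including the double step of the middle rotor. This sketches the stepping logic only; positions are 0–25, the notch values correspond to commonly documented turnover points and are illustrative, and any single-notch choice gives the same period.

# Verify the 26 x 25 x 26 = 16,900 rotor-position period caused by double-stepping.
# Positions and notches are 0-25 (A = 0); the notch values are illustrative.

RIGHT_NOTCH = 21   # e.g. rotor III turns its neighbour over when leaving 'V'
MIDDLE_NOTCH = 4   # e.g. rotor II turns its neighbour over when leaving 'E'

def step(left, middle, right):
    """One key press: the right rotor always steps; the middle rotor steps if the right
    rotor is at its notch OR if the middle rotor itself is at its notch (the double step,
    which also advances the left rotor)."""
    if middle == MIDDLE_NOTCH:
        left = (left + 1) % 26
        middle = (middle + 1) % 26
    elif right == RIGHT_NOTCH:
        middle = (middle + 1) % 26
    right = (right + 1) % 26
    return left, middle, right

state = (0, 0, 0)
seen = {}
for i in range(20_000):
    if state in seen:
        print(f"cycle length: {i - seen[state]}")   # expected: 16900
        break
    seen[state] = i
    state = step(*state)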
To make room for the Naval fourth rotors, the reflector was made much thinner. The fourth rotor fitted into the space made available. No other changes were made, which eased the changeover. Since there were only three pawls, the fourth rotor never stepped, but could be manually set into one of 26 possible positions.
A device that was designed, but not implemented before the war's end, was the Lückenfüllerwalze (gap-fill wheel), which implemented irregular stepping. It allowed field configuration of notches in all 26 positions. If the number of notches was relatively prime to 26 and the number of notches was different for each wheel, the stepping would be more unpredictable. Like the Umkehrwalze-D, it also allowed the internal wiring to be reconfigured.
Entry wheel
The current entry wheel (Eintrittswalze in German), or entry stator, connects the plugboard to the rotor assembly. If the plugboard is not present, the entry wheel instead connects the keyboard and lampboard to the rotor assembly. While the exact wiring used is of comparatively little importance to security, it proved an obstacle to Rejewski's progress during his study of the rotor wirings. The commercial Enigma connects the keys in the order of their sequence on a QWERTZ keyboard: Q→A, W→B, E→C and so on. The military Enigma connects them in straight alphabetical order: A→A, B→B, C→C, and so on. It took inspired guesswork for Rejewski to penetrate the modification.
Reflector
With the exception of models A and B, the last rotor came before a 'reflector' (German: Umkehrwalze, meaning 'reversal rotor'), a patented feature unique to Enigma among the period's various rotor machines. The reflector connected outputs of the last rotor in pairs, redirecting current back through the rotors by a different route. The reflector ensured that Enigma would be self-reciprocal; thus, with two identically configured machines, a message could be encrypted on one and decrypted on the other, without the need for a bulky mechanism to switch between encryption and decryption modes. The reflector allowed a more compact design, but it also gave Enigma the property that no letter ever encrypted to itself. This was a severe cryptological flaw that was subsequently exploited by codebreakers.
In Model 'C', the reflector could be inserted in one of two different positions. In Model 'D', the reflector could be set in 26 possible positions, although it did not move during encryption. In the Abwehr Enigma, the reflector stepped during encryption in a manner similar to the other wheels.
In the German Army and Air Force Enigma, the reflector was fixed and did not rotate; there were four versions. The original version was marked 'A', and was replaced by Umkehrwalze B on 1 November 1937. A third version, Umkehrwalze C was used briefly in 1940, possibly by mistake, and was solved by Hut 6. The fourth version, first observed on 2 January 1944, had a rewireable reflector, called Umkehrwalze D, nick-named Uncle Dick by the British, allowing the Enigma operator to alter the connections as part of the key settings.
Plugboard
The plugboard (Steckerbrett in German) permitted variable wiring that could be reconfigured by the operator (visible on the front panel of Figure 1; some of the patch cords can be seen in the lid). It was introduced on German Army versions in 1928, and was soon adopted by the Reichsmarine (German Navy). The plugboard contributed more cryptographic strength than an extra rotor, as it had 150 trillion possible settings (see below). Enigma without a plugboard (known as unsteckered Enigma) could be solved relatively straightforwardly using hand methods; these techniques were generally defeated by the plugboard, driving Allied cryptanalysts to develop special machines to solve it.
A cable placed onto the plugboard connected letters in pairs; for example, E and Q might be a steckered pair. The effect was to swap those letters before and after the main rotor scrambling unit. For example, when an operator pressed E, the signal was diverted to Q before entering the rotors. Up to 13 steckered pairs might be used at one time, although only 10 were normally used.
Current flowed from the keyboard through the plugboard, and proceeded to the entry-rotor or Eintrittswalze. Each letter on the plugboard had two jacks. Inserting a plug disconnected the upper jack (from the keyboard) and the lower jack (to the entry-rotor) of that letter. The plug at the other end of the crosswired cable was inserted into another letter's jacks, thus switching the connections of the two letters.
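The plugboard is a self-inverse substitution: it swaps the letters within each steckered pair and leaves the rest untouched, and it is applied both before and after the rotor stack. A minimal sketch follows; the pairs shown are arbitrary examples, not a historical key.

# Plugboard as a self-inverse letter swap; unplugged letters pass straight through.

def make_plugboard(pairs):
    """Build a mapping from a list of steckered pairs such as ["EQ", "AN"]."""
    mapping = {}
    for a, b in pairs:
        mapping[a] = b
        mapping[b] = a
    return mapping

def plugboard(letter, mapping):
    return mapping.get(letter, letter)

stecker = make_plugboard(["EQ", "AN", "BD"])  # arbitrary example pairs
assert plugboard("E", stecker) == "Q"         # E and Q are swapped, as in the text
assert plugboard("Q", stecker) == "E"         # applying it twice restores the letter
assert plugboard("X", stecker) == "X"         # unplugged letters are unchanged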
Accessories
Other features made various Enigma machines more secure or more convenient.
Schreibmax
Some M4 Enigmas used the Schreibmax, a small printer that could print the 26 letters on a narrow paper ribbon. This eliminated the need for a second operator to read the lamps and transcribe the letters. The Schreibmax was placed on top of the Enigma machine and was connected to the lamp panel. To install the printer, the lamp cover and light bulbs had to be removed. It improved both convenience and operational security; the printer could be installed remotely such that the signal officer operating the machine no longer had to see the decrypted plaintext.
Fernlesegerät
Another accessory was the remote lamp panel Fernlesegerät. For machines equipped with the extra panel, the wooden case of the Enigma was wider and could store the extra panel. A lamp panel version could be connected afterwards, but that required, as with the Schreibmax, that the lamp panel and light bulbs be removed. The remote panel made it possible for a person to read the decrypted plaintext without the operator seeing it.
Uhr
In 1944, the Luftwaffe introduced a plugboard switch, called the Uhr (clock), a small box containing a switch with 40 positions. It replaced the standard plugs. After connecting the plugs, as determined in the daily key sheet, the operator turned the switch into one of the 40 positions, each producing a different combination of plug wiring. Most of these plug connections were, unlike the default plugs, not pair-wise. In one switch position, the Uhr did not swap letters, but simply emulated the 13 stecker wires with plugs.
Mathematical analysis
The Enigma transformation for each letter can be specified mathematically as a product of permutations. Assuming a three-rotor German Army/Air Force Enigma, let $P$ denote the plugboard transformation, $U$ denote that of the reflector, and $L$, $M$, $R$ denote those of the left, middle and right rotors respectively. Then the encryption $E$ can be expressed as

$$E = P R M L U L^{-1} M^{-1} R^{-1} P^{-1}.$$

After each key press, the rotors turn, changing the transformation. For example, if the right-hand rotor $R$ is rotated $i$ positions, the transformation becomes

$$\rho^{i} R \rho^{-i},$$

where $\rho$ is the cyclic permutation mapping A to B, B to C, and so forth. Similarly, the middle and left-hand rotors can be represented as $j$ and $k$ rotations of $M$ and $L$. The encryption transformation can then be described as

$$E = P \left(\rho^{i} R \rho^{-i}\right) \left(\rho^{j} M \rho^{-j}\right) \left(\rho^{k} L \rho^{-k}\right) U \left(\rho^{k} L^{-1} \rho^{-k}\right) \left(\rho^{j} M^{-1} \rho^{-j}\right) \left(\rho^{i} R^{-1} \rho^{-i}\right) P^{-1}.$$
Combining three rotors from a set of five, with each of the three rotors set to one of 26 positions, and the plugboard with ten pairs of letters connected, the military Enigma has 158,962,555,217,826,360,000 different settings (nearly 159 quintillion, or about 67 bits).
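This settings count can be reproduced from the three factors just described: an ordered choice of three rotors out of five, the 26^3 rotor positions, and the number of ways to connect ten plugboard pairs. A quick check in Python:

from math import factorial

rotor_orders = 5 * 4 * 3                     # ordered choice of 3 rotors from 5
rotor_positions = 26 ** 3                    # each rotor in one of 26 positions
# Ways to form 10 unordered pairs from 26 letters: 26! / (6! * 10! * 2^10)
plugboard = factorial(26) // (factorial(6) * factorial(10) * 2 ** 10)

total = rotor_orders * rotor_positions * plugboard
print(f"{plugboard:,}")                      # 150,738,274,937,250 (the "150 trillion" above)
print(f"{total:,}")                          # 158,962,555,217,826,360,000
print(f"about 2^{total.bit_length() - 1}")   # roughly 67 bits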
Operation
Basic operation
A German Enigma operator would be given a plaintext message to encrypt. After setting up his machine, he would type the message on the Enigma keyboard. For each letter pressed, one lamp lit indicating a different letter according to a pseudo-random substitution determined by the electrical pathways inside the machine. The letter indicated by the lamp would be recorded, typically by a second operator, as the cyphertext letter. The action of pressing a key also moved one or more rotors so that the next key press used a different electrical pathway, and thus a different substitution would occur even if the same plaintext letter were entered again. For each key press there was rotation of at least the right hand rotor and less often the other two, resulting in a different substitution alphabet being used for every letter in the message. This process continued until the message was completed. The cyphertext recorded by the second operator would then be transmitted, usually by radio in Morse code, to an operator of another Enigma machine. This operator would type in the cyphertext and — as long as all the settings of the deciphering machine were identical to those of the enciphering machine — for every key press the reverse substitution would occur and the plaintext message would emerge.
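The complete encrypt/decrypt cycle described above can be condensed into a small, self-contained simulation. This is an illustrative model only: it uses commonly documented wirings for rotors I–III and reflector B, fixes the ring settings and omits the plugboard, so it is not a faithful reproduction of any particular service machine. Its purpose is to show the self-reciprocal behaviour: a second machine with identical settings turns the ciphertext back into the plaintext.

# Minimal, illustrative Enigma model: rotors I-III, reflector B, no plugboard,
# ring settings fixed. Wirings are the commonly documented ones; this is a sketch
# of the principle, not a faithful reproduction of any particular service machine.
import string

A = string.ascii_uppercase
ROTORS = {  # wiring, notch letter (the neighbour steps as the rotor leaves it)
    "I":   ("EKMFLGDQVZNTOWYHXUSPAIBRCJ", "Q"),
    "II":  ("AJDKSIRUXBLHWTMCQGZNPYFVOE", "E"),
    "III": ("BDFHJLCPRTXVZNYEIWGAKMUSQO", "V"),
}
REFLECTOR_B = "YRUHQSLDPXNGOKMIEBFZCWVJAT"

class Enigma:
    def __init__(self, rotor_names, window_letters):
        # Rotors are listed left to right, as seen by the operator.
        self.wirings = [ROTORS[n][0] for n in rotor_names]
        self.notches = [A.index(ROTORS[n][1]) for n in rotor_names]
        self.pos = [A.index(w) for w in window_letters]

    def _step(self):
        left, mid, right = self.pos
        if mid == self.notches[1]:          # double step of the middle rotor
            left, mid = (left + 1) % 26, (mid + 1) % 26
        elif right == self.notches[2]:
            mid = (mid + 1) % 26
        self.pos = [left, mid, (right + 1) % 26]

    def press(self, letter):
        self._step()                        # rotors move before the contacts close
        c = A.index(letter)
        for wiring, pos in zip(reversed(self.wirings), reversed(self.pos)):
            c = (A.index(wiring[(c + pos) % 26]) - pos) % 26     # right to left
        c = A.index(REFLECTOR_B[c])                              # reflector
        for wiring, pos in zip(self.wirings, self.pos):
            c = (wiring.index(A[(c + pos) % 26]) - pos) % 26     # back, left to right
        return A[c]

    def encrypt(self, text):
        return "".join(self.press(ch) for ch in text if ch in A)

if __name__ == "__main__":
    ciphertext = Enigma(["I", "II", "III"], "AAA").encrypt("ENIGMAREVEALED")
    recovered = Enigma(["I", "II", "III"], "AAA").encrypt(ciphertext)
    print(ciphertext, recovered)            # recovered == "ENIGMAREVEALED"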
Details
In use, the Enigma required a list of daily key settings and auxiliary documents. In German military practice, communications were divided into separate networks, each using different settings. These communication nets were termed keys at Bletchley Park, and were assigned code names, such as Red, Chaffinch, and Shark. Each unit operating in a network was given the same settings list for its Enigma, valid for a period of time. The procedures for German Naval Enigma were more elaborate and more secure than those in other services and employed auxiliary codebooks. Navy codebooks were printed in red, water-soluble ink on pink paper so that they could easily be destroyed if they were endangered or if the vessel was sunk.
An Enigma machine's setting (its cryptographic key in modern terms; Schlüssel in German) specified each operator-adjustable aspect of the machine:
Wheel order (Walzenlage) – the choice of rotors and the order in which they are fitted.
Ring settings (Ringstellung) – the position of each alphabet ring relative to its rotor wiring.
Plug connections (Steckerverbindungen) – the pairs of letters in the plugboard that are connected together.
In very late versions, the wiring of the reconfigurable reflector.
Starting position of the rotors (Grundstellung) – chosen by the operator, should be different for each message.
For a message to be correctly encrypted and decrypted, both sender and receiver had to configure their Enigma in the same way; rotor selection and order, ring positions, plugboard connections and starting rotor positions must be identical. Except for the starting positions, these settings were established beforehand, distributed in key lists and changed daily. For example, the settings for the 18th day of the month in the German Luftwaffe Enigma key list number 649 (see image) were as follows:
Wheel order: IV, II, V
Ring settings: 15, 23, 26
Plugboard connections: EJ OY IV AQ KW FX MT PS LU BD
Reconfigurable reflector wiring: IU AS DV GL FT OX EZ CH MR KN BQ PW
Indicator groups: lsa zbw vcj rxn
Enigma was designed to be secure even if the rotor wiring was known to an opponent, although in practice considerable effort protected the wiring configuration. If the wiring is secret, the total number of possible configurations has been calculated to be around 3 × 10^114 (approximately 380 bits); with known wiring and other operational constraints, this is reduced to around 10^23 (76 bits). Because of the large number of possibilities, users of Enigma were confident of its security; it was not then feasible for an adversary to even begin to try a brute-force attack.
Indicator
Most of the key was kept constant for a set time period, typically a day. A different initial rotor position was used for each message, a concept similar to an initialisation vector in modern cryptography. The reason is that encrypting many messages with identical or near-identical settings (termed in cryptanalysis as being in depth), would enable an attack using a statistical procedure such as Friedman's Index of coincidence. The starting position for the rotors was transmitted just before the ciphertext, usually after having been enciphered. The exact method used was termed the indicator procedure. Design weakness and operator sloppiness in these indicator procedures were two of the main weaknesses that made cracking Enigma possible.
One of the earliest indicator procedures for the Enigma was cryptographically flawed and allowed Polish cryptanalysts to make the initial breaks into the plugboard Enigma. The procedure had the operator set his machine in accordance with the secret settings that all operators on the net shared. The settings included an initial position for the rotors (the Grundstellung), say, AOH. The operator turned his rotors until AOH was visible through the rotor windows. At that point, the operator chose his own arbitrary starting position for the message he would send. An operator might select EIN, and that became the message setting for that encryption session. The operator then typed EIN into the machine twice, producing the encrypted indicator, for example XHTLOA. This was then transmitted, at which point the operator would turn the rotors to his message settings, EIN in this example, and then type the plaintext of the message.
At the receiving end, the operator set the machine to the initial settings (AOH) and typed in the first six letters of the message (XHTLOA). In this example, EINEIN emerged on the lamps, so the operator would learn the message setting that the sender used to encrypt this message. The receiving operator would set his rotors to EIN, type in the rest of the ciphertext, and get the deciphered message.
This indicator scheme had two weaknesses. First, the use of a global initial position (Grundstellung) meant all message keys used the same polyalphabetic substitution. In later indicator procedures, the operator selected his initial position for encrypting the indicator and sent that initial position in the clear. The second problem was the repetition of the indicator, which was a serious security flaw. The message setting was encoded twice, resulting in a relation between first and fourth, second and fifth, and third and sixth character. These security flaws enabled the Polish Cipher Bureau to break into the pre-war Enigma system as early as 1932. The early indicator procedure was subsequently described by German cryptanalysts as the "faulty indicator technique".
During World War II, codebooks were only used each day to set up the rotors, their ring settings and the plugboard. For each message, the operator selected a random start position, let's say WZA, and a random message key, perhaps SXT. He moved the rotors to the WZA start position and encoded the message key SXT. Assume the result was UHL. He then set up the message key, SXT, as the start position and encrypted the message. Next, he transmitted the start position, WZA, the encoded message key, UHL, and then the ciphertext. The receiver set up the start position according to the first trigram, WZA, and decoded the second trigram, UHL, to obtain the SXT message setting. Next, he used this SXT message setting as the start position to decrypt the message. This way, each ground setting was different and the new procedure avoided the security flaw of double encoded message settings.
This procedure was used by Wehrmacht and Luftwaffe only. The Kriegsmarine procedures on sending messages with the Enigma were far more complex and elaborate. Prior to encryption the message was encoded using the Kurzsignalheft code book. The Kurzsignalheft contained tables to convert sentences into four-letter groups. A great many choices were included, for example, logistic matters such as refuelling and rendezvous with supply ships, positions and grid lists, harbour names, countries, weapons, weather conditions, enemy positions and ships, date and time tables. Another codebook contained the Kenngruppen and Spruchschlüssel: the key identification and message key.
Additional details
The Army Enigma machine used only the 26 letters of the alphabet. Punctuation was replaced with rare character combinations. A space was omitted or replaced with an X, and the X was also generally used as a full stop.
Some punctuation marks were different in other parts of the armed forces. The Wehrmacht replaced a comma with ZZ and the question mark with FRAGE or FRAQ.
The Kriegsmarine replaced the comma with Y and the question mark with UD. The combination CH, as in "Acht" (eight) or "Richtung" (direction), was replaced with Q (AQT, RIQTUNG). Two, three and four zeros were replaced with CENTA, MILLE and MYRIA.
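As a rough illustration of this message preparation, the following minimal Python sketch applies a few of the substitutions described above (the CH, comma and question-mark rules are the Kriegsmarine ones; the X handling follows the Army convention). It is a simplification for illustration only, not a complete or authoritative set of rules.

def prepare_text(plaintext):
    """Apply a few of the letter substitutions described above (illustrative only)."""
    text = plaintext.upper()
    text = text.replace("CH", "Q")    # Kriegsmarine: ACHT -> AQT, RICHTUNG -> RIQTUNG
    text = text.replace(",", "Y")     # Kriegsmarine comma
    text = text.replace("?", "UD")    # Kriegsmarine question mark
    text = text.replace(".", "X")     # X generally used as a full stop
    text = text.replace(" ", "X")     # space omitted or replaced with X
    return "".join(c for c in text if c.isalpha())

print(prepare_text("RICHTUNG ACHT, ACHT."))   # -> RIQTUNGXAQTYXAQTX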
The Wehrmacht and the Luftwaffe transmitted messages in groups of five characters.
The Kriegsmarine, using the four rotor Enigma, had four-character groups. Frequently used names or words were varied as much as possible. Words like Minensuchboot (minesweeper) could be written as MINENSUCHBOOT, MINBOOT, MMMBOOT or MMM354. To make cryptanalysis harder, messages were limited to 250 characters. Longer messages were divided into several parts, each using a different message key.
Example encoding process
The character substitutions by the Enigma machine as a whole can be expressed as a string of letters with each position occupied by the character that will replace the character at the corresponding position in the alphabet. For example, a given machine configuration that encoded A to L, B to U, C to S, ..., and Z to J could be represented compactly as
LUSHQOXDMZNAIKFREPCYBWVGTJ
and the encoding of a particular character by that configuration could be represented by highlighting the encoded character as in
D > LUS(H)QOXDMZNAIKFREPCYBWVGTJ
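Such a mapping string can be applied mechanically. The following is a minimal Python sketch (an added illustration, not part of the original article) that looks up the substitute for a letter in a 26-letter mapping string:

ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"

def encode_char(mapping, ch):
    # mapping is a 26-letter string: position i holds the substitute
    # for the i-th letter of the alphabet
    return mapping[ALPHABET.index(ch)]

print(encode_char("LUSHQOXDMZNAIKFREPCYBWVGTJ", "D"))   # -> H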
Since the operation of an Enigma machine encoding a message is a series of such configurations, each associated with a single character being encoded, a sequence of such representations can be used to represent the operation of the machine as it encodes a message. For example, the process of encoding the first sentence of the main body of the famous "Dönitz message" to
RBBF PMHP HGCZ XTDY GAHG UFXG EWKB LKGJ
can be represented as
0001 F > KGWNT(R)BLQPAHYDVJIFXEZOCSMU CDTK 25 15 16 26
0002 O > UORYTQSLWXZHNM(B)VFCGEAPIJDK CDTL 25 15 16 01
0003 L > HLNRSKJAMGF(B)ICUQPDEYOZXWTV CDTM 25 15 16 02
0004 G > KPTXIG(F)MESAUHYQBOVJCLRZDNW CDUN 25 15 17 03
0005 E > XDYB(P)WOSMUZRIQGENLHVJTFACK CDUO 25 15 17 04
0006 N > DLIAJUOVCEXBN(M)GQPWZYFHRKTS CDUP 25 15 17 05
0007 D > LUS(H)QOXDMZNAIKFREPCYBWVGTJ CDUQ 25 15 17 06
0008 E > JKGO(P)TCIHABRNMDEYLZFXWVUQS CDUR 25 15 17 07
0009 S > GCBUZRASYXVMLPQNOF(H)WDKTJIE CDUS 25 15 17 08
0010 I > XPJUOWIY(G)CVRTQEBNLZMDKFAHS CDUT 25 15 17 09
0011 S > DISAUYOMBPNTHKGJRQ(C)LEZXWFV CDUU 25 15 17 10
0012 T > FJLVQAKXNBGCPIRMEOY(Z)WDUHST CDUV 25 15 17 11
0013 S > KTJUQONPZCAMLGFHEW(X)BDYRSVI CDUW 25 15 17 12
0014 O > ZQXUVGFNWRLKPH(T)MBJYODEICSA CDUX 25 15 17 13
0015 F > XJWFR(D)ZSQBLKTVPOIEHMYNCAUG CDUY 25 15 17 14
0016 O > FSKTJARXPECNUL(Y)IZGBDMWVHOQ CDUZ 25 15 17 15
0017 R > CEAKBMRYUVDNFLTXW(G)ZOIJQPHS CDVA 25 15 18 16
0018 T > TLJRVQHGUCXBZYSWFDO(A)IEPKNM CDVB 25 15 18 17
0019 B > Y(H)LPGTEBKWICSVUDRQMFONJZAX CDVC 25 15 18 18
0020 E > KRUL(G)JEWNFADVIPOYBXZCMHSQT CDVD 25 15 18 19
0021 K > RCBPQMVZXY(U)OFSLDEANWKGTIJH CDVE 25 15 18 20
0022 A > (F)CBJQAWTVDYNXLUSEZPHOIGMKR CDVF 25 15 18 21
0023 N > VFTQSBPORUZWY(X)HGDIECJALNMK CDVG 25 15 18 22
0024 N > JSRHFENDUAZYQ(G)XTMCBPIWVOLK CDVH 25 15 18 23
0025 T > RCBUTXVZJINQPKWMLAY(E)DGOFSH CDVI 25 15 18 24
0026 Z > URFXNCMYLVPIGESKTBOQAJZDH(W) CDVJ 25 15 18 25
0027 U > JIOZFEWMBAUSHPCNRQLV(K)TGYXD CDVK 25 15 18 26
0028 G > ZGVRKO(B)XLNEIWJFUSDQYPCMHTA CDVL 25 15 18 01
0029 E > RMJV(L)YQZKCIEBONUGAWXPDSTFH CDVM 25 15 18 02
0030 B > G(K)QRFEANZPBMLHVJCDUXSOYTWI CDWN 25 15 19 03
0031 E > YMZT(G)VEKQOHPBSJLIUNDRFXWAC CDWO 25 15 19 04
0032 N > PDSBTIUQFNOVW(J)KAHZCEGLMYXR CDWP 25 15 19 05
where the letters following each mapping are the letters that appear at the windows at that stage (the only state changes visible to the operator) and the numbers show the underlying physical position of each rotor.
The character mappings for a given configuration of the machine are in turn the result of a series of such mappings applied by each pass through a component of the machine: the encoding of a character resulting from the application of a given component's mapping serves as the input to the mapping of the subsequent component. For example, the 4th step in the encoding above can be expanded to show each of these stages using the same representation of mappings and highlighting for the encoded character:
G > ABCDEF(G)HIJKLMNOPQRSTUVWXYZ
P EFMQAB(G)UINKXCJORDPZTHWVLYS AE.BF.CM.DQ.HU.JN.LX.PR.SZ.VW
1 OFRJVM(A)ZHQNBXPYKCULGSWETDI N 03 VIII
2 (N)UKCHVSMDGTZQFYEWPIALOXRJB U 17 VI
3 XJMIYVCARQOWH(L)NDSUFKGBEPZT D 15 V
4 QUNGALXEPKZ(Y)RDSOFTVCMBIHWJ C 25 β
R RDOBJNTKVEHMLFCWZAXGYIPS(U)Q c
4 EVTNHQDXWZJFUCPIAMOR(B)SYGLK β
3 H(V)GPWSUMDBTNCOKXJIQZRFLAEY V
2 TZDIPNJESYCUHAVRMXGKB(F)QWOL VI
1 GLQYW(B)TIZDPSFKANJCUXREVMOH VIII
P E(F)MQABGUINKXCJORDPZTHWVLYS AE.BF.CM.DQ.HU.JN.LX.PR.SZ.VW
F < KPTXIG(F)MESAUHYQBOVJCLRZDNW
Here the encoding begins trivially with the first "mapping" representing the keyboard (which has no effect), followed by the plugboard, configured as AE.BF.CM.DQ.HU.JN.LX.PR.SZ.VW which has no effect on 'G', followed by the VIII rotor in the 03 position, which maps G to A, then the VI rotor in the 17 position, which maps A to N, ..., and finally the plugboard again, which maps B to F, producing the overall mapping indicated at the final step: G to F.
Note that this model has 4 rotors (lines 1 through 4) and that the reflector (line R) also permutes (garbles) letters.
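The stage-by-stage trace above can likewise be reproduced mechanically by chaining the individual mapping strings. The following Python sketch (built from the mapping strings shown in the trace with the highlight parentheses removed; an illustration, not an authoritative Enigma implementation) folds the letter G through the plugboard, the four rotors, the reflector and back, reproducing the overall result G to F:

from functools import reduce

ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"

def apply_mapping(ch, mapping):
    return mapping[ALPHABET.index(ch)]

# Mapping strings copied from the trace above: plugboard, rotors VIII, VI, V
# and beta going in, reflector c, the same rotors on the return path, and
# the plugboard again.
stages = [
    "EFMQABGUINKXCJORDPZTHWVLYS",  # plugboard
    "OFRJVMAZHQNBXPYKCULGSWETDI",  # rotor VIII (inward)
    "NUKCHVSMDGTZQFYEWPIALOXRJB",  # rotor VI (inward)
    "XJMIYVCARQOWHLNDSUFKGBEPZT",  # rotor V (inward)
    "QUNGALXEPKZYRDSOFTVCMBIHWJ",  # rotor beta (inward)
    "RDOBJNTKVEHMLFCWZAXGYIPSUQ",  # reflector c
    "EVTNHQDXWZJFUCPIAMORBSYGLK",  # rotor beta (return)
    "HVGPWSUMDBTNCOKXJIQZRFLAEY",  # rotor V (return)
    "TZDIPNJESYCUHAVRMXGKBFQWOL",  # rotor VI (return)
    "GLQYWBTIZDPSFKANJCUXREVMOH",  # rotor VIII (return)
    "EFMQABGUINKXCJORDPZTHWVLYS",  # plugboard again
]

print(reduce(apply_mapping, stages, "G"))   # -> F

Because every stage is a permutation and the signal is reflected back through the same rotors and plugboard, the overall substitution is self-reciprocal: feeding F through the same chain returns G.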
Models
The Enigma family included multiple designs. The earliest were commercial models dating from the early 1920s. Starting in the mid-1920s, the German military began to use Enigma, making a number of security-related changes. Various nations either adopted or adapted the design for their own cipher machines.
An estimated 40,000 Enigma machines were constructed. After the end of World War II, the Allies sold captured Enigma machines, still widely considered secure, to developing countries.
Commercial Enigma
On 23 February 1918, Arthur Scherbius applied for a patent for a ciphering machine that used rotors. Scherbius and E. Richard Ritter founded the firm of Scherbius & Ritter. They approached the German Navy and Foreign Office with their design, but neither agency was interested. Scherbius & Ritter then assigned the patent rights to Gewerkschaft Securitas, who founded the Chiffriermaschinen Aktien-Gesellschaft (Cipher Machines Stock Corporation) on 9 July 1923; Scherbius and Ritter were on the board of directors.
Enigma A (1923)
Chiffriermaschinen AG began advertising a rotor machine, Enigma model A, which was exhibited at the Congress of the International Postal Union in 1924. The machine was heavy and bulky, incorporating a typewriter. It measured 65×45×38 cm and weighed about .
Enigma B (1924)
In 1924 Enigma model B was introduced, and was of a similar construction. While bearing the Enigma name, both models A and B were quite unlike later versions: They differed in physical size and shape, but also cryptographically, in that they lacked the reflector. This model of Enigma machine was referred to as the Glowlamp Enigma or Glühlampenmaschine since it produced its output on a lamp panel rather than paper. This method of output was much more reliable and cost effective. Hence this machine was 1/8th the price of its predecessor.
Enigma C (1926)
The reflector, suggested by Scherbius' colleague Willi Korn, was introduced in Enigma C (1926).
Model C was the third model of the so-called ″glowlamp Enigmas″ (after A and B) and it again lacked a typewriter.
Enigma D (1927)
The Enigma C quickly gave way to Enigma D (1927). This version was widely used, with shipments to Sweden, the Netherlands, United Kingdom, Japan, Italy, Spain, United States and Poland. In 1927 Hugh Foss at the British Government Code and Cypher School was able to show that commercial Enigma machines could be broken, provided suitable cribs were available. The Enigma D also pioneered the use of the "QWERTZ" keyboard layout, which is very similar to the American QWERTY layout and later became the standard layout for German keyboards.
"Navy Cipher D"
Other countries used Enigma machines. The Italian Navy adopted the commercial Enigma as "Navy Cipher D". The Spanish also used commercial Enigma machines during their Civil War. British codebreakers succeeded in breaking these machines, which lacked a plugboard. Enigma machines were also used by diplomatic services.
Enigma H (1929)
There was also a large, eight-rotor printing model, the Enigma H, called Enigma II by the Reichswehr. In 1933 the Polish Cipher Bureau detected that it was in use for high-level military communication, but it was soon withdrawn, as it was unreliable and jammed frequently.
Enigma K
The Swiss used a version of Enigma called Model K or Swiss K for military and diplomatic use, which was very similar to commercial Enigma D. The machine's code was cracked by Poland, France, the United Kingdom and the United States; the latter code-named it INDIGO. An Enigma T model, code-named Tirpitz, was used by Japan.
Typex
Once the British worked out Enigma's principle of operation, they addressed its weaknesses in a design of their own, the Typex, which the Germans believed to be unsolvable.
Military Enigma
The various services of the Wehrmacht used various Enigma versions, and replaced them frequently, sometimes with ones adapted from other services. Enigma seldom carried high-level strategic messages, which when not urgent went by courier, and when urgent went by other cryptographic systems including the Geheimschreiber.
Funkschlüssel C
The Reichsmarine was the first military branch to adopt Enigma. This version, named Funkschlüssel C ("Radio cipher C"), had been put into production by 1925 and was introduced into service in 1926.
The keyboard and lampboard contained 29 letters — A-Z, Ä, Ö and Ü — that were arranged alphabetically, as opposed to the QWERTZUI ordering. The rotors had 28 contacts, with the letter X wired to bypass the rotors unencrypted. Three rotors were chosen from a set of five and the reflector could be inserted in one of four different positions, denoted α, β, γ and δ. The machine was revised slightly in July 1933.
Enigma G (1928–1930)
By 15 July 1928, the German Army (Reichswehr) had introduced their own exclusive version of the Enigma machine, the Enigma G.
The Abwehr used the Enigma G (the Abwehr Enigma). This Enigma variant was a four-wheel unsteckered machine with multiple notches on the rotors. This model was equipped with a counter that incremented upon each key press, and so is also known as the "counter machine" or the Zählwerk Enigma.
Wehrmacht Enigma I (1930–1938)
Enigma machine G was modified to the Enigma I by June 1930. Enigma I is also known as the Wehrmacht, or "Services" Enigma, and was used extensively by German military services and other government organisations (such as the railways) before and during World War II.
The major difference between Enigma I (German Army version from 1930), and commercial Enigma models was the addition of a plugboard to swap pairs of letters, greatly increasing cryptographic strength.
Other differences included the use of a fixed reflector and the relocation of the stepping notches from the rotor body to the movable letter rings. The machine measured and weighed around .
In August 1935, the Air Force introduced the Wehrmacht Enigma for their communications.
M3 (1934)
By 1930, the Reichswehr had suggested that the Navy adopt their machine, citing the benefits of increased security (with the plugboard) and easier interservice communications. The Reichsmarine eventually agreed and in 1934 brought into service the Navy version of the Army Enigma, designated Funkschlüssel M or M3. While the Army used only three rotors at that time, the Navy specified a choice of three from a possible five.
Two extra rotors (1938)
In December 1938, the Army issued two extra rotors so that the three rotors were chosen from a set of five. In 1938, the Navy added two more rotors, and then another in 1939 to allow a choice of three rotors from a set of eight.
M4 (1942)
A four-rotor Enigma was introduced by the Navy for U-boat traffic on 1 February 1942, called M4 (the network was known as Triton, or Shark to the Allies). The extra rotor was fitted in the same space by splitting the reflector into a combination of a thin reflector and a thin fourth rotor.
Surviving machines
The effort to break the Enigma was not disclosed until the 1970s. Since then, interest in the Enigma machine has grown. Enigmas are on public display in museums around the world, and several are in the hands of private collectors and computer history enthusiasts.
The Deutsches Museum in Munich has both the three- and four-rotor German military variants, as well as several civilian versions. Enigma machines are exhibited at the National Codes Centre in Bletchley Park, the Government Communications Headquarters, the Science Museum in London, Discovery Park of America in Tennessee, the Polish Army Museum in Warsaw, the Swedish Army Museum (Armémuseum) in Stockholm, the Military Museum of A Coruña in Spain, the Nordland Red Cross War Memorial Museum in Narvik, Norway, The Artillery, Engineers and Signals Museum in Hämeenlinna, Finland, the Technical University of Denmark in Lyngby, Denmark, in Skanderborg Bunkerne at Skanderborg, Denmark, and at the Australian War Memorial and in the foyer of the Australian Signals Directorate, both in Canberra, Australia. The Jozef Pilsudski Institute in London exhibited a rare Polish Enigma double assembled in France in 1940.
In 2020, thanks to the support of the Ministry of Culture and National Heritage, it became the property of the Polish History Museum.
In the United States, Enigma machines can be seen at the Computer History Museum in Mountain View, California, and at the National Security Agency's National Cryptologic Museum in Fort Meade, Maryland, where visitors can try their hand at enciphering and deciphering messages. Two machines that were acquired after the capture of during World War II are on display alongside the submarine at the Museum of Science and Industry in Chicago, Illinois. A three-rotor Enigma is on display at Discovery Park of America in Union City, Tennessee. A four-rotor device is on display in the ANZUS Corridor of the Pentagon on the second floor, A ring, between corridors 8 and 9. This machine is on loan from Australia. The United States Air Force Academy in Colorado Springs has a machine on display in the Computer Science Department. There is also a machine located at The National WWII Museum in New Orleans. The International Museum of World War II near Boston has seven Enigma machines on display, including a U-Boat four-rotor model, one of three surviving examples of an Enigma machine with a printer, one of fewer than ten surviving ten-rotor code machines, an example blown up by a retreating German Army unit, and two three-rotor Enigmas that visitors can operate to encode and decode messages. Computer Museum of America in Roswell, Georgia has a three-rotor model with two additional rotors. The machine is fully restored and CMoA has the original paperwork for the purchase on 7 March 1936 by the German Army. The National Museum of Computing also contains surviving Enigma machines in Bletchley, England.
In Canada, a Swiss Army issue Enigma-K, is in Calgary, Alberta. It is on permanent display at the Naval Museum of Alberta inside the Military Museums of Calgary. A four-rotor Enigma machine is on display at the Military Communications and Electronics Museum at Canadian Forces Base (CFB) Kingston in Kingston, Ontario.
Occasionally, Enigma machines are sold at auction; in recent years prices have ranged from US$40,000 up to US$547,500, paid in 2017. Replicas are available in various forms, including an exact reconstructed copy of the Naval M4 model, an Enigma implemented in electronics (Enigma-E), various simulators and paper-and-scissors analogues.
A rare Abwehr Enigma machine, designated G312, was stolen from the Bletchley Park museum on 1 April 2000. In September, a man identifying himself as "The Master" sent a note demanding £25,000 and threatening to destroy the machine if the ransom was not paid. In early October 2000, Bletchley Park officials announced that they would pay the ransom, but the stated deadline passed with no word from the blackmailer. Shortly afterward, the machine was sent anonymously to BBC journalist Jeremy Paxman, missing three rotors.
In November 2000, an antiques dealer named Dennis Yates was arrested after telephoning The Sunday Times to arrange the return of the missing parts. The Enigma machine was returned to Bletchley Park after the incident. In October 2001, Yates was sentenced to ten months in prison and served three months.
In October 2008, the Spanish daily newspaper El País reported that 28 Enigma machines had been discovered by chance in an attic of Army headquarters in Madrid. These four-rotor commercial machines had helped Franco's Nationalists win the Spanish Civil War: although the British cryptologist Alfred Dillwyn Knox broke the cipher generated by Franco's Enigma machines in 1937, this was not disclosed to the Republicans, who failed to break the cipher themselves. The Nationalist government continued using its 50 Enigmas into the 1950s. Some machines have gone on display in Spanish military museums, including one at the National Museum of Science and Technology (MUNCYT) in La Coruña and one at the Spanish Army Museum. Two have been given to Britain's GCHQ.
The Bulgarian military used Enigma machines with a Cyrillic keyboard; one is on display in the National Museum of Military History in Sofia.
On 3 December 2020, German divers working on behalf of the World Wide Fund for Nature discovered a destroyed Enigma machine in Flensburg Firth (part of the Baltic Sea), believed to be from a scuttled U-boat. The machine will be restored by, and become the property of, the Archaeology Museum of Schleswig-Holstein.
Derivatives
The Enigma was influential in the field of cipher machine design, spinning off other rotor machines. The British Typex was originally derived from the Enigma patents; Typex even includes features from the patent descriptions that were omitted from the actual Enigma machine. The British paid no royalties for the use of the patents, to protect secrecy. The Typex implementation is not the same as that found in German or other Axis versions.
A Japanese Enigma clone was codenamed GREEN by American cryptographers. Little used, it contained four rotors mounted vertically. In the United States, cryptologist William Friedman designed the M-325, a machine logically similar to the Enigma, although different in construction.
A unique rotor machine called Cryptograph was constructed in 2002 by Netherlands-based Tatjana van Vark. This device makes use of 40-point rotors, allowing letters, numbers and some punctuation to be used; each rotor contains 509 parts.
Machines like the SIGABA, NEMA, Typex and so forth are deliberately not considered to be Enigma derivatives as their internal ciphering functions are not mathematically identical to the Enigma transform.
Several software implementations exist, but not all exactly match Enigma behaviour. Many Java applet Enigmas only accept single-letter entry, complicating use even if the applet is Enigma compliant. Technically, Enigma@home is the largest-scale deployment of a software Enigma, but its decoding software does not implement encipherment, making it a derivative (all original machines could both encipher and decipher).
A user-friendly three-rotor simulator, where users can select rotors, use the plugboard and define new settings for the rotors and reflectors, is available. The output appears in separate windows which can be independently made "invisible" to hide decryption. Another includes an "autotyping" function which takes plaintext from a clipboard and converts it to ciphertext (or vice versa) at one of four speeds. The "very fast" option produces 26 characters in less than one second.
Simulators
In popular culture
Literature
Hugh Whitemore's play, Breaking the Code (1986), focuses on the life and death of Alan Turing, who was the central force in continuing to solve the Enigma code in the United Kingdom, during World War II. Turing was played by Derek Jacobi, who also played Turing in a 1996 television adaptation of the play.
Robert Harris' novel Enigma (1995) is set against the backdrop of World War II Bletchley Park and cryptologists working to read Naval Enigma in Hut 8.
Neal Stephenson's novel Cryptonomicon (1999) prominently features the Enigma machine and efforts to break it, and portrays the German U-boat command under Karl Dönitz using it in apparently deliberate ignorance of its penetration.
Enigma is featured in The Code Book, a survey of the history of cryptography written by Simon Singh and published in 1999.
The Enigma machine is used as a key plot element in Century Rain by Alastair Reynolds, set in an alternate Earth where technological research has stagnated and the Enigma is the highest level of encryption available both to civilians and military.
Elizabeth Wein's The Enigma Game (2020) is a young adult historical fiction novel about three young adults (a war orphan, a volunteer driver with the Royal Air Force, and a flight leader for the 648 Squadron) who find and use an Enigma machine (hidden by a German spy) to decode overheard transmissions and help the British war effort during WWII.
Films
Sekret Enigmy (1979; translation: The Enigma Secret), is a Polish film dealing with Polish aspects of the subject.
The plot of the film U-571 (released in 2000) revolves around an attempt by American, rather than British, forces to seize an Enigma machine from a German U-boat.
The 2001 war comedy film All the Queen's Men featured a fictitious British plot to capture an Enigma machine by infiltrating the Enigma factory with men disguised as women.
Harris' book, with substantial changes in plot, was adapted as the film Enigma (2001), directed by Michael Apted and starring Kate Winslet and Dougray Scott. The film was criticised for historical inaccuracies, including neglect of the role of Poland's Biuro Szyfrów. The film, like the book, makes a Pole the villain, who seeks to betray the secret of Enigma decryption.
The film The Imitation Game (2014) tells the story of Alan Turing and his attempts to crack the Enigma machine cipher during World War II.
Television
In the British television series The Bletchley Circle, the Typex was used by the protagonists during the war, and in Season 2, Episode 4, they visit Bletchley Park to seek one out in order to crack the code of the black-market procurer and smuggler Marta, who used the Typex to encode her ledger. The Circle, forced to settle for using an Enigma instead, successfully cracks the code.
In Elementary season 5, episode 23 ("Scrambled"), a drug smuggling gang uses a four-rotor Enigma machine as part of their effort to encrypt their communications.
In Bones season 8, episode 12 ("The Corpse in the Canopy"), Dr. Jack Hodgins uses an Enigma machine to send information to Seeley Booth at the FBI in order to prevent Christopher Pelant, a master hacker, from spying on their communications.
See also
Beaumanor Hall, a stately home used during the Second World War for military intelligence
Alastair Denniston
Erich Fellgiebel
Gisbert Hasenjaeger — responsible for Enigma security
Erhard Maertens — investigated Enigma security
Fritz Thiele
United States Naval Computing Machine Laboratory
Arlington Hall
References
Bibliography
Comer, Tony (2021), "Poland's Decisive Role in Cracking Enigma and Transforming the UK's SIGINT Operations", RUSI Commentary, 27 January 2021. https://rusi.org/commentary/poland-decisive-role-cracking-enigma-and-transforming-uk-sigint-operations
Further reading
Heath, Nick, Hacking the Nazis: The secret story of the women who broke Hitler's codes TechRepublic, 27 March 2015
Marks, Philip. "Umkehrwalze D: Enigma's Rewirable Reflector — Part I", Cryptologia 25(2), April 2001, pp. 101–141.
Marks, Philip. "Umkehrwalze D: Enigma's Rewirable Reflector — Part II", Cryptologia 25(3), July 2001, pp. 177–212.
Marks, Philip. "Umkehrwalze D: Enigma's Rewirable Reflector — Part III", Cryptologia 25(4), October 2001, pp. 296–310.
Perera, Tom. The Story of the ENIGMA: History, Technology and Deciphering, 2nd Edition, CD-ROM, 2004, Artifax Books, sample pages
Ratcliffe, Rebecca. "Searching for Security: The German Investigations into Enigma's Security", Intelligence and National Security, 14(1), 1999 (Special Issue), pp. 146–167.
Rejewski, Marian. "How Polish Mathematicians Deciphered the Enigma", Annals of the History of Computing 3, 1981. This article is regarded by Andrew Hodges, Alan Turing's biographer, as "the definitive account" (see Hodges' Alan Turing: The Enigma, Walker and Company, 2000 paperback edition, p. 548, footnote 4.5).
Ulbricht, Heinz. "Enigma Uhr", Cryptologia, 23(3), April 1999, pp. 194–205.
Untold Story of Enigma Code-Breaker — The Ministry of Defence (U.K.)
External links
Gordon Corera, Poland's overlooked Enigma codebreakers, BBC News Magazine, 4 July 2014
Long-running list of places with Enigma machines on display
Bletchley Park National Code Centre Home of the British codebreakers during the Second World War
Enigma machines on the Crypto Museum Web site
Pictures of a four-rotor naval enigma, including Flash (SWF) views of the machine
Enigma Pictures and Demonstration by NSA Employee at RSA
Kenngruppenheft
Process of building an Enigma M4 replica
Breaking German Navy Ciphers
An online Enigma Machine simulator
Enigma simulation
Universal Enigma simulator
Cryptii — Online modular playground, including 13 Enigma machine variations
Products introduced in 1918
Broken stream ciphers
Cryptographic hardware
Rotor machines
Signals intelligence of World War II
World War II military equipment of Germany
Encryption devices
Military communications of Germany
Military equipment introduced in the 1920s |
9569 | https://en.wikipedia.org/wiki/Endomorphism | Endomorphism | In mathematics, an endomorphism is a morphism from a mathematical object to itself. An endomorphism that is also an isomorphism is an automorphism. For example, an endomorphism of a vector space V is a linear map f: V → V, and an endomorphism of a group G is a group homomorphism f: G → G. In general, we can talk about endomorphisms in any category. In the category of sets, endomorphisms are functions from a set S to itself.
In any category, the composition of any two endomorphisms of an object X is again an endomorphism of X. It follows that the set of all endomorphisms of X forms a monoid, the full transformation monoid, denoted End(X) (or End_C(X) to emphasize the category C).
Automorphisms
An invertible endomorphism of X is called an automorphism. The set of all automorphisms is a subset of End(X) with a group structure, called the automorphism group of X and denoted Aut(X). Every automorphism is both an endomorphism and an isomorphism, and every endomorphism and every isomorphism is in turn a morphism.
Endomorphism rings
Any two endomorphisms of an abelian group A can be added together by the rule (f + g)(a) = f(a) + g(a). Under this addition, and with multiplication being defined as function composition, the endomorphisms of an abelian group form a ring (the endomorphism ring). For example, the set of endomorphisms of Z^n is the ring of all n × n matrices with integer entries. The endomorphisms of a vector space or module also form a ring, as do the endomorphisms of any object in a preadditive category. The endomorphisms of a nonabelian group generate an algebraic structure known as a near-ring. Every ring with one is the endomorphism ring of its regular module, and so is a subring of an endomorphism ring of an abelian group; however there are rings that are not the endomorphism ring of any abelian group.
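As a concrete illustration (an added example, not part of the original text), endomorphisms of the abelian group Z^2 can be written as 2 × 2 integer matrices, and the ring operations become matrix addition and matrix multiplication, for instance:

\varphi = \begin{pmatrix} 1 & 2 \\ 0 & 1 \end{pmatrix}, \qquad
\psi = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \qquad
\varphi + \psi = \begin{pmatrix} 1 & 3 \\ 1 & 1 \end{pmatrix}, \qquad
\varphi \circ \psi = \varphi\psi = \begin{pmatrix} 2 & 1 \\ 1 & 0 \end{pmatrix}.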
Operator theory
In any concrete category, especially for vector spaces, endomorphisms are maps from a set into itself, and may be interpreted as unary operators on that set, acting on the elements, and allowing one to define the notion of orbits of elements, etc.
Depending on the additional structure defined for the category at hand (topology, metric, ...), such operators can have properties like continuity, boundedness, and so on. More details should be found in the article about operator theory.
Endofunctions
An endofunction is a function whose domain is equal to its codomain. A homomorphic endofunction is an endomorphism.
Let S be an arbitrary set. Among endofunctions on S one finds permutations of S and constant functions associating to every x in S the same element c in S. Every permutation of S has the codomain equal to its domain and is bijective and invertible. If S has more than one element, a constant function on S has an image that is a proper subset of its codomain, and thus is not bijective (and hence not invertible). The function associating to each natural number n the floor of n/2 has its image equal to its codomain and is not invertible.
Finite endofunctions are equivalent to directed pseudoforests. For sets of size n there are n^n endofunctions on the set.
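For example, a three-element set has 3^3 = 27 endofunctions. The count is easy to check by brute force; the following is a small Python sketch (an added illustration, not part of the original article):

from itertools import product

S = (0, 1, 2)
# An endofunction of S assigns to each element of S some element of S,
# so there are |S| ** |S| of them.
endofunctions = [dict(zip(S, values)) for values in product(S, repeat=len(S))]
print(len(endofunctions))                              # 27 == 3 ** 3
print(sum(1 for f in endofunctions
          if sorted(f.values()) == list(S)))           # 6 of them are permutations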
Particular examples of bijective endofunctions are the involutions; i.e., the functions coinciding with their inverses.
See also
Adjoint endomorphism
Epimorphism (Surjective morphism)
Frobenius endomorphism
Monomorphism (Injective morphism)
Notes
References
External links
Morphisms |
9611 | https://en.wikipedia.org/wiki/E-commerce | E-commerce | E-commerce (electronic commerce) is the activity of electronically buying or selling products on online services or over the Internet. E-commerce draws on technologies such as mobile commerce, electronic funds transfer, supply chain management, Internet marketing, online transaction processing, electronic data interchange (EDI), inventory management systems, and automated data collection systems. E-commerce is in turn driven by the technological advances of the semiconductor industry, and is the largest sector of the electronics industry.
E-commerce typically uses the web for at least a part of a transaction's life cycle although it may also use other technologies such as e-mail. Typical e-commerce transactions include the purchase of products (such as books from Amazon) or services (such as music downloads in the form of digital distribution such as iTunes Store). There are three areas of e-commerce: online retailing, electronic markets, and online auctions. E-commerce is supported by electronic business.
E-commerce businesses may also employ some or all of the following:
Online shopping for retail sales direct to consumers via web sites and mobile apps, and conversational commerce via live chat, chatbots, and voice assistants;
Providing or participating in online marketplaces, which process third-party business-to-consumer (B2C) or consumer-to-consumer (C2C) sales;
Business-to-business (B2B) buying and selling;
Gathering and using demographic data through web contacts and social media;
B2B electronic data interchange;
Marketing to prospective and established customers by e-mail or fax (for example, with newsletters);
Engaging in pretail for launching new products and services;
Online financial exchanges for currency exchanges or trading purposes.
History and timeline
The term was coined and first employed by Dr. Robert Jacobson, Principal Consultant to the California State Assembly's Utilities & Commerce Committee, in the title and text of California's Electronic Commerce Act, carried by the late Committee Chairwoman Gwen Moore (D-L.A.) and enacted in 1984.
A timeline for the development of e-commerce:
1971 or 1972: The ARPANET is used to arrange a cannabis sale between students at the Stanford Artificial Intelligence Laboratory and the Massachusetts Institute of Technology, later described as "the seminal act of e-commerce" in John Markoff's book What the Dormouse Said.
1976: Atalla Technovation (founded by Mohamed Atalla) and Bunker Ramo Corporation (founded by George Bunker and Simon Ramo) introduce products designed for secure online transaction processing, intended for financial institutions.
1979: Michael Aldrich demonstrates the first online shopping system.
1981: Thomson Holidays UK is the first business-to-business (B2B) online shopping system to be installed.
1982: Minitel was introduced nationwide in France by France Télécom and used for online ordering.
1983: California State Assembly holds first hearing on "electronic commerce" in Volcano, California. Testifying are CPUC, MCI Mail, Prodigy, CompuServe, Volcano Telephone, and Pacific Telesis. (Not permitted to testify is Quantum Technology, later to become AOL.) California's Electronic Commerce Act was passed in 1984.
1983: Karen Earle Lile (AKA Karen Bean) and Kendall Ross Bean create e-commerce service in San Francisco Bay Area. Buyers and sellers of pianos connect through a database created by Piano Finders on a Kaypro personal computer using DOS interface. Pianos for sale are listed on a Bulletin board system. Buyers print list of pianos for sale by a dot matrix printer. Customer service happened through a Piano Advice Hotline listed in the San Francisco Chronicle classified ads and money transferred by a bank wire transfer when a sale was completed.
1984: Gateshead SIS/Tesco is first B2C online shopping system and Mrs Snowball, 72, is the first online home shopper
1984: In April 1984, CompuServe launches the Electronic Mall in the US and Canada. It is the first comprehensive electronic commerce service.
1989: In May 1989, Sequoia Data Corp. introduced Compumarket, the first internet based system for e-commerce. Sellers and buyers could post items for sale and buyers could search the database and make purchases with a credit card.
1990: Tim Berners-Lee writes the first web browser, WorldWideWeb, using a NeXT computer.
1992: Book Stacks Unlimited in Cleveland opens a commercial sales website (www.books.com) selling books online with credit card processing.
1993: Paget Press releases edition No. 3 of the first app store, The Electronic AppWrapper
1994: Netscape releases the Navigator browser in October under the code name Mozilla. Netscape 1.0 is introduced in late 1994 with SSL encryption that made transactions secure.
1994: Ipswitch IMail Server becomes the first software available online for sale and immediate download via a partnership between Ipswitch, Inc. and OpenMarket.
1994: "Ten Summoner's Tales" by Sting becomes the first secure online purchase through NetMarket.
1995: The US National Science Foundation lifts its former strict prohibition of commercial enterprise on the Internet.
1995: Thursday 27 April 1995, the purchase of a book by Paul Stanfield, product manager for CompuServe UK, from W H Smith's shop within CompuServe's UK Shopping Centre is the UK's first national online shopping service secure transaction. The shopping service at launch featured W H Smith, Tesco, Virgin Megastores/Our Price, Great Universal Stores (GUS), Interflora, Dixons Retail, Past Times, PC World (retailer) and Innovations.
1995: Amazon is launched by Jeff Bezos.
1995: eBay is founded by computer programmer Pierre Omidyar as AuctionWeb. It is the first online auction site supporting person-to-person transactions.
1995: The first commercial-free 24-hour, internet-only radio stations, Radio HK and NetRadio start broadcasting.
1996: The use of Excalibur BBS with replicated "storefronts" was an early implementation of electronic commerce started by a group of SysOps in Australia and replicated to global partner sites.
1998: Electronic postal stamps can be purchased and downloaded for printing from the Web.
1999: Alibaba Group is established in China. Business.com sold for US$7.5 million to eCompanies, which was purchased in 1997 for US$149,000. The peer-to-peer filesharing software Napster launches. ATG Stores launches to sell decorative items for the home online.
1999: Global e-commerce reaches $150 billion
2000: The dot-com bust.
2001: eBay has the largest userbase of any e-commerce site.
2001: Alibaba.com achieved profitability in December 2001.
2002: eBay acquires PayPal for $1.5 billion. Niche retail companies Wayfair and NetShops are founded with the concept of selling products through several targeted domains, rather than a central portal.
2003: Amazon posts first yearly profit.
2004: DHgate.com, China's first online B2B transaction platform, is established, forcing other B2B sites to move away from the "yellow pages" model.
2007: Business.com acquired by R.H. Donnelley for $345 million.
2014: US e-commerce and online retail sales projected to reach $294 billion, an increase of 12 percent over 2013 and 9% of all retail sales. Alibaba Group has the largest Initial public offering ever, worth $25 billion.
2015: Amazon accounts for more than half of all e-commerce growth, selling almost 500 million SKUs in the US.
2017: Retail e-commerce sales across the world reach $2.304 trillion, a 24.8 percent increase over the previous year.
2017: Global e-commerce transactions generate , including for business-to-business (B2B) transactions and for business-to-consumer (B2C) sales.
Business application
Some common applications related to electronic commerce are:
Governmental regulation
In the United States, California's Electronic Commerce Act (1984), enacted by the Legislature, and the more recent California Privacy Act (2020) enacted through a popular election proposition, control specifically how electronic commerce may be conducted in California. In the US in its entirety, electronic commerce activities are regulated more broadly by the Federal Trade Commission (FTC). These activities include the use of commercial e-mails, online advertising and consumer privacy. The CAN-SPAM Act of 2003 establishes national standards for direct marketing over e-mail. The Federal Trade Commission Act regulates all forms of advertising, including online advertising, and states that advertising must be truthful and non-deceptive. Using its authority under Section 5 of the FTC Act, which prohibits unfair or deceptive practices, the FTC has brought a number of cases to enforce the promises in corporate privacy statements, including promises about the security of consumers' personal information. As a result, any corporate privacy policy related to e-commerce activity may be subject to enforcement by the FTC.
The Ryan Haight Online Pharmacy Consumer Protection Act of 2008, which came into law in 2008, amends the Controlled Substances Act to address online pharmacies.
Conflict of laws in cyberspace is a major hurdle for harmonization of legal framework for e-commerce around the world. In order to give a uniformity to e-commerce law around the world, many countries adopted the UNCITRAL Model Law on Electronic Commerce (1996).
Internationally there is the International Consumer Protection and Enforcement Network (ICPEN), which was formed in 1991 from an informal network of government customer fair trade organisations. The purpose was stated as being to find ways of co-operating on tackling consumer problems connected with cross-border transactions in both goods and services, and to help ensure exchanges of information among the participants for mutual benefit and understanding. From this came Econsumer.gov, an ICPEN initiative since April 2001. It is a portal to report complaints about online and related transactions with foreign companies.
There is also the Asia-Pacific Economic Cooperation (APEC), which was established in 1989 with the vision of achieving stability, security and prosperity for the region through free and open trade and investment. APEC has an Electronic Commerce Steering Group and also works on common privacy regulations throughout the APEC region.
In Australia, Trade is covered under Australian Treasury Guidelines for electronic commerce and the Australian Competition & Consumer Commission regulates and offers advice on how to deal with businesses online, and offers specific advice on what happens if things go wrong.
In the United Kingdom, The Financial Services Authority (FSA) was formerly the regulating authority for most aspects of the EU's Payment Services Directive (PSD), until its replacement in 2013 by the Prudential Regulation Authority and the Financial Conduct Authority. The UK implemented the PSD through the Payment Services Regulations 2009 (PSRs), which came into effect on 1 November 2009. The PSR affects firms providing payment services and their customers. These firms include banks, non-bank credit card issuers and non-bank merchant acquirers, e-money issuers, etc. The PSRs created a new class of regulated firms known as payment institutions (PIs), who are subject to prudential requirements. Article 87 of the PSD requires the European Commission to report on the implementation and impact of the PSD by 1 November 2012.
In India, the Information Technology Act 2000 governs the basic applicability of e-commerce.
In China, the Telecommunications Regulations of the People's Republic of China (promulgated on 25 September 2000) stipulated the Ministry of Industry and Information Technology (MIIT) as the government department regulating all telecommunications-related activities, including electronic commerce. The Administrative Measures on Internet Information Services, released on the same day, were the first administrative regulation to address profit-generating activities conducted through the Internet and laid the foundation for future regulations governing e-commerce in China. On 28 August 2004, the eleventh session of the tenth NPC Standing Committee adopted the Electronic Signature Law, which regulates data messages, electronic signature authentication and legal liability issues. It is considered the first law in China's e-commerce legislation, a milestone in improving that legislation, and the start of a period of rapid development for e-commerce legislation in China.
Forms
Contemporary electronic commerce can be classified into two categories. The first category is business based on types of goods sold (involves everything from ordering "digital" content for immediate online consumption, to ordering conventional goods and services, to "meta" services to facilitate other types of electronic commerce). The second category is based on the nature of the participant (B2B, B2C, C2B and C2C).
On the institutional level, big corporations and financial institutions use the internet to exchange financial data to facilitate domestic and international business. Data integrity and security are pressing issues for electronic commerce.
Aside from traditional e-commerce, the terms m-commerce (mobile commerce) and (around 2013) t-commerce have also been used.
Global trends
In 2010, the United Kingdom had the highest per capita e-commerce spending in the world. As of 2013, the Czech Republic was the European country where e-commerce delivers the biggest contribution to the enterprises' total revenue. Almost a quarter (24%) of the country's total turnover is generated via the online channel.
Among emerging economies, China's e-commerce presence continues to expand every year. With 668 million Internet users, China's online shopping sales reached $253 billion in the first half of 2015, accounting for 10% of total Chinese consumer retail sales in that period. The Chinese retailers have been able to help consumers feel more comfortable shopping online. e-commerce transactions between China and other countries increased 32% to 2.3 trillion yuan ($375.8 billion) in 2012 and accounted for 9.6% of China's total international trade. In 2013, Alibaba had an e-commerce market share of 80% in China. In 2014, there were 600 million Internet users in China (twice as many as in the US), making it the world's biggest online market. China is also the largest e-commerce market in the world by value of sales, with an estimated in 2016. Research shows that Chinese consumer motivations are different enough from Western audiences to require unique e-commerce app designs instead of simply porting Western apps into the Chinese market.
Recent research indicates that electronic commerce, commonly referred to as e-commerce, now shapes the manner in which people shop for products. The GCC countries have a rapidly growing market and are characterized by a population that is becoming wealthier (Yuldashev). As such, retailers have launched Arabic-language websites as a means to target this population. Secondly, there are predictions of increased mobile purchases and an expanding internet audience (Yuldashev). The growth and development of these two aspects are making the GCC countries larger players in the electronic commerce market over time. Specifically, research shows that the e-commerce market is expected to grow to over $20 billion by the year 2020 among these GCC countries (Yuldashev). The e-commerce market has also gained much popularity among western countries, and in particular Europe and the U.S. These countries have been highly characterized by consumer-packaged goods (CPG) (Geisler, 34). However, trends show signs of a future reversal. As in the GCC countries, there has been increased purchase of goods and services through online channels rather than offline channels. Activist investors are trying hard to consolidate and slash their overall costs, and the governments in western countries continue to impose more regulation on CPG manufacturers (Geisler, 36). In these senses, CPG investors are being forced to adapt to e-commerce as it is effective as well as a means for them to thrive.
In 2013, Brazil's e-commerce was growing quickly, with retail e-commerce sales expected to grow at a double-digit pace through 2014. By 2016, eMarketer expected retail e-commerce sales in Brazil to reach $17.3 billion. India has an Internet user base of about 460 million as of December 2017. Despite having the third largest user base in the world, the penetration of the Internet is low compared to markets like the United States, United Kingdom or France, but is growing at a much faster rate, adding around 6 million new entrants every month. In India, cash on delivery is the most preferred payment method, accumulating 75% of the e-retail activities. E-retail's share of the Indian retail market is expected to rise from 2.5% in 2016 to 5% in 2020.
The future trends in the GCC countries will be similar to those of the western countries. Despite the forces pushing businesses to adopt e-commerce as a means to sell goods and products, the manner in which customers make purchases is similar in countries from these two regions. For instance, there has been increased usage of smartphones, which comes in conjunction with an increase in the overall internet audience in these regions. Yuldashev writes that consumers are scaling up to more modern technology that allows for mobile marketing.
However, the percentage of smartphone and internet users who make online purchases is expected to vary in the first few years. It will depend on the willingness of people to adopt this new trend (The Statistics Portal). For example, the UAE has the greatest smartphone penetration, at 73.8 per cent, and 91.9 per cent of its population has access to the internet. On the other hand, smartphone penetration in Europe has been reported to be at 64.7 per cent (The Statistics Portal). Regardless, the disparity in percentage between these regions is expected to level out in future because e-commerce technology is expected to grow to allow for more users.
The e-commerce business within these two regions will result in competition. Government bodies at the country level will enhance their measures and strategies to ensure sustainability and consumer protection (Krings, et al.). These increased measures will raise the environmental and social standards in the countries, factors that will determine the success of the e-commerce market in these countries. For example, the adoption of tough sanctions will make it difficult for companies to enter the e-commerce market, while lenient sanctions will make entry easier. As such, the future trends between the GCC countries and the Western countries will be independent of these sanctions (Krings, et al.). These countries need to reach rational conclusions in coming up with effective sanctions.
The rate of growth of the number of internet users in the Arab countries has been rapid – 13.1% in 2015. A significant portion of the e-commerce market in the Middle East comprises people in the 30–34 year age group. Egypt has the largest number of internet users in the region, followed by Saudi Arabia and Morocco; together these constitute three-quarters of the region's share. Yet, internet penetration is low: 35% in Egypt and 65% in Saudi Arabia.
E-commerce has become an important tool for small and large businesses worldwide, not only to sell to customers, but also to engage them.
In 2012, e-commerce sales topped $1 trillion for the first time in history.
Mobile devices are playing an increasing role in the mix of e-commerce, this is also commonly called mobile commerce, or m-commerce. In 2014, one estimate saw purchases made on mobile devices making up 25% of the market by 2017.
For traditional businesses, one research study stated that information technology and cross-border e-commerce are a good opportunity for the rapid development and growth of enterprises. Many companies have invested enormous amounts in mobile applications. The DeLone and McLean Model stated that three perspectives contribute to a successful e-business: information system quality, service quality and users' satisfaction. E-commerce removes the limits of time and space: there are more opportunities to reach out to customers around the world and to cut out unnecessary intermediate links, thereby reducing costs, and companies can benefit from one-on-one analysis of large amounts of customer data to achieve a highly personalized strategy, in order to fully enhance the core competitiveness of the company's products.
Modern 3D graphics technologies, such as Facebook 3D Posts, are considered by some social media marketers and advertisers to be a better way to promote consumer goods than static photos, and some brands like Sony are already paving the way for augmented reality commerce. Wayfair now lets shoppers inspect a 3D version of its furniture in a home setting before buying.
Logistics
Logistics in e-commerce mainly concerns fulfillment. Online markets and retailers have to find the best possible way to fill orders and deliver products. Small companies usually control their own logistic operation because they do not have the ability to hire an outside company. Most large companies hire a fulfillment service that takes care of a company's logistic needs.
Contrary to common misconception, there are significant barriers to entry in e-commerce.
Impacts
Impact on markets and retailers
E-commerce markets are growing at noticeable rates. The online market is expected to grow by 56% in 2015–2020. In 2017, retail e-commerce sales worldwide amounted to 2.3 trillion US dollars and e-retail revenues are projected to grow to 4.891 trillion US dollars in 2021. Traditional markets are expected to see only 2% growth during the same period. Brick and mortar retailers are struggling because of online retailers' ability to offer lower prices and higher efficiency. Many larger retailers are able to maintain a presence offline and online by linking physical and online offerings.
E-commerce allows customers to overcome geographical barriers and allows them to purchase products anytime and from anywhere. Online and traditional markets have different strategies for conducting business. Traditional retailers offer a smaller assortment of products because of limited shelf space, whereas online retailers often hold no inventory but send customer orders directly to the manufacturer. The pricing strategies are also different for traditional and online retailers. Traditional retailers base their prices on store traffic and the cost to keep inventory. Online retailers base prices on the speed of delivery.
There are two ways for marketers to conduct business through e-commerce: fully online or online along with a brick and mortar store. Online marketers can offer lower prices, greater product selection, and high efficiency rates. Many customers prefer online markets if the products can be delivered quickly at relatively low price. However, online retailers cannot offer the physical experience that traditional retailers can. It can be difficult to judge the quality of a product without the physical experience, which may cause customers to experience product or seller uncertainty. Another issue regarding the online market is concerns about the security of online transactions. Many customers remain loyal to well-known retailers because of this issue.
Security is a primary problem for e-commerce in developed and developing countries. E-commerce security is protecting businesses' websites and customers from unauthorized access, use, alteration, or destruction. Types of threats include malicious code, unwanted programs (adware, spyware), phishing, hacking, and cyber vandalism. E-commerce websites use different tools to avert security threats. These tools include firewalls, encryption software, digital certificates, and passwords.
Impact on supply chain management
For a long time, companies had been troubled by the gap between the benefits that supply chain technology offers and the solutions needed to deliver those benefits. However, the emergence of e-commerce has provided a more practical and effective way of delivering the benefits of the new supply chain technologies.
E-commerce has the capability to integrate all inter-company and intra-company functions, meaning that the three flows (physical flow, financial flow and information flow) of the supply chain can also be affected by e-commerce. The effects on physical flows have improved the movement of products and inventory for companies. For information flows, e-commerce has increased the information-processing capacity beyond what companies used to have, and for financial flows, e-commerce allows companies to have more efficient payment and settlement solutions.
In addition, e-commerce has a more sophisticated level of impact on supply chains: Firstly, the performance gap can be eliminated since companies can identify gaps between different levels of supply chains by electronic means; Secondly, as a result of the emergence of e-commerce, new capabilities such as implementing ERP systems, like SAP ERP, Xero, or Megaventory, have helped companies to manage operations with customers and suppliers. Yet these new capabilities are still not fully exploited. Thirdly, technology companies will keep investing in new e-commerce software solutions as they expect a return on investment. Fourthly, e-commerce helps to solve many aspects of issues that companies may find difficult to cope with, such as political barriers or cross-country changes. Finally, e-commerce provides companies a more efficient and effective way to collaborate with each other within the supply chain.
Impact on employment
E-commerce helps create new job opportunities due to information-related services, software apps and digital products. It also causes job losses. The areas with the greatest predicted job loss are retail, postal, and travel agencies. The development of e-commerce will create jobs that require highly skilled workers to manage large amounts of information, customer demands, and production processes. In contrast, people with poor technical skills cannot enjoy these wage benefits. On the other hand, because e-commerce requires sufficient stock that can be delivered to customers in time, the warehouse becomes an important element. Warehouses need more staff to manage, supervise and organize them, and so the condition of the warehouse environment becomes a concern for employees.
Impact on customers
E-commerce brings convenience for customers as they do not have to leave home and only need to browse websites online, especially when buying products that are not sold in nearby shops. It can help customers buy a wider range of products and save time. Consumers also gain power through online shopping. They are able to research products and compare prices among retailers. Also, online shopping often provides sales promotions or discount codes, making it more cost-effective for customers. Moreover, e-commerce provides detailed product information; even in-store staff cannot offer such detailed explanations. Customers can also review and track their order history online.
E-commerce technologies cut transaction costs by allowing both manufacturers and consumers to bypass intermediaries. This is achieved by extending the search area for the best price deals and by group purchasing. The success of e-commerce at urban and regional levels depends on how local firms and consumers have adopted e-commerce.
However, e-commerce lacks human interaction for customers, especially those who prefer face-to-face contact. Customers are also concerned with the security of online transactions and tend to remain loyal to well-known retailers. In recent years, clothing retailers such as Tommy Hilfiger have started adding Virtual Fit platforms to their e-commerce sites to reduce the risk of customers buying wrongly sized clothes, although these vary greatly in their fitness for purpose. When a customer regrets a purchase, returning the goods and obtaining a refund can be inconvenient, as the customer needs to pack and post the goods. If the products are expensive, large or fragile, safety during the return also becomes an issue.
Impact on the environment
In 2018, e-commerce generated 1.3 million tons of container cardboard in North America, an increase from 1.1 million in 2017. Only 35 percent of North American cardboard manufacturing capacity is from recycled content. The recycling rate in Europe is 80 percent and in Asia 93 percent. Amazon, the largest user of boxes, has a strategy to cut back on packing material and has reduced packaging material used by 19 percent by weight since 2016. Amazon is requiring retailers to manufacture their product packaging in a way that doesn't require additional shipping packaging. Amazon also has an 85-person team researching ways to reduce and improve their packaging and shipping materials.
Impact on traditional retail
E-commerce has been cited as a major force behind the failure of major U.S. retailers in a trend frequently referred to as a "retail apocalypse." The rise of e-commerce outlets like Amazon has made it harder for traditional retailers to attract customers to their stores and has forced companies to change their sales strategies. Many companies have turned to sales promotions and increased digital efforts to lure shoppers while shutting down brick-and-mortar locations. The trend has forced some traditional retailers to shutter their brick-and-mortar operations.
Distribution channels
E-commerce has grown in importance as companies have adopted pure-click and brick-and-click channel systems. Three types of channel systems can be distinguished:
Pure-click or pure-play companies are those that have launched a website without any previous existence as a firm.
Bricks-and-clicks companies are those existing companies that have added an online site for e-commerce.
Click-to-brick companies are online retailers that later open physical locations to supplement their online efforts.
E-commerce may take place on retailers' Web sites or mobile apps, or those of e-commerce marketplaces such as on Amazon, or Tmall from AliBaba. Those channels may also be supported by conversational commerce, e.g. live chat or chatbots on Web sites. Conversational commerce may also be standalone such as live chat or chatbots on messaging apps and via voice assistants.
Recommendation
The contemporary e-commerce trend recommends that companies shift from the traditional business model, which focuses on "standardized products, homogeneous market and long product life cycle", to a new business model focused on "varied and customized products". E-commerce requires the company to be able to satisfy the varied needs of different customers and provide them with a wider range of products.
With more product choices, the information available to customers for selecting products that meet their needs becomes crucial. To apply the principle of mass customization, the use of a recommender system is suggested. Such a system helps recommend suitable products to customers and helps customers make decisions during the purchasing process. A recommender system can be driven by the top sellers on the website, the demographics of customers, or consumers' buying behavior. There are three main ways of making recommendations: recommending products to customers directly, providing detailed product information, and showing other buyers' opinions or critiques. This benefits the consumer experience in the absence of physical shopping. In general, recommender systems are used to reach customers online and help them find the right products effectively and directly.
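The mechanics can be illustrated with a small, hypothetical sketch in Python (the order data, product names and scoring are invented for illustration; production recommender systems use far richer signals and models):

```python
from collections import Counter
from itertools import combinations

# Hypothetical order history: each inner list is one customer's basket.
orders = [
    ["phone case", "screen protector", "charger"],
    ["phone case", "charger"],
    ["laptop sleeve", "mouse"],
    ["phone case", "screen protector"],
]

# "Top sellers" recommendation: rank products by how often they were bought.
top_sellers = Counter(item for basket in orders for item in basket)

# "Buying behavior" recommendation: count how often two products co-occur.
co_purchases = Counter()
for basket in orders:
    for a, b in combinations(sorted(set(basket)), 2):
        co_purchases[(a, b)] += 1

def recommend(product, k=3):
    """Suggest the items most often bought together with `product`."""
    scores = Counter()
    for (a, b), count in co_purchases.items():
        if a == product:
            scores[b] += count
        elif b == product:
            scores[a] += count
    return [item for item, _ in scores.most_common(k)]

print(top_sellers.most_common(2))   # overall best sellers
print(recommend("phone case"))      # items frequently bought with a phone case
```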
E-commerce during COVID-19
In March 2020, global retail website traffic hit 14.3 billion visits signifying an unprecedented growth of e-commerce during the lockdown of 2020. Studies show that in the US, as many as 29% of surveyed shoppers state that they will never go back to shopping in person again; in the UK, 43% of consumers state that they expect to keep on shopping the same way even after the lockdown is over.
Retail e-commerce sales figures show that COVID-19 had a significant impact on e-commerce, with sales expected to reach $6.5 trillion by 2023.
See also
Comparison of free software e-commerce web application frameworks
Comparison of shopping cart software
Customer intelligence
Digital economy
E-commerce credit card payment system
Electronic bill payment
Electronic money
Non-store retailing
Paid content
Payments as a service
Types of e-commerce
Timeline of e-commerce
South Dakota v. Wayfair, Inc.
References
Further reading
External links
E-commerce
Electronics industry
Non-store retailing
Retail formats
Supply chain management |
9738 | https://en.wikipedia.org/wiki/Email | Email | Electronic mail (email or e-mail) is a method of exchanging messages ("mail") between people using electronic devices. Email was thus conceived as the electronic (digital) version of, or counterpart to, mail, at a time when "mail" meant only physical mail (hence e- + mail). Email later became a ubiquitous (very widely used) communication medium, to the point that in current use, an e-mail address is often treated as a basic and necessary part of many processes in business, commerce, government, education, entertainment, and other spheres of daily life in most countries. Email is the medium, and each message sent therewith is called an email (mass/count distinction).
Email's earliest development began in the 1960s, but at first users could send e-mail only to other users of the same computer. Some systems also supported a form of instant messaging, where sender and receiver needed to be online simultaneously. The history of modern Internet email services reaches back to the early ARPANET, with standards for encoding email messages published as early as 1973 (RFC 561). An email message sent in the early 1970s is similar to a basic email sent today. Ray Tomlinson is credited as the inventor of networked email; in 1971, he developed the first system able to send mail between users on different hosts across the ARPANET, using the @ sign to link the user name with a destination server. By the mid-1970s, this was the form recognized as email. At the time, though, email, like most computing, was mostly just for "computer geeks" in certain environments, such as engineering and the sciences. During the 1980s and 1990s, use of email became common in the worlds of business management, government, universities, and defense/military industries, but much of the public did not use it yet. Starting with the advent of web browsers in the mid-1990s, use of email began to extend to the rest of the public, no longer something only for geeks in certain professions or industries. By the 2010s, webmail (the web-era form of email) had gained its ubiquitous status.
Email operates across computer networks, primarily the Internet. Today's email systems are based on a store-and-forward model. Email servers accept, forward, deliver, and store messages. Neither the users nor their computers are required to be online simultaneously; they need to connect, typically to a mail server or a webmail interface, only to send or receive messages or download them.
Originally an ASCII text-only communications medium, Internet email was extended by Multipurpose Internet Mail Extensions (MIME) to carry text in other character sets and multimedia content attachments. International email, with internationalized email addresses using UTF-8, is standardized but not widely adopted.
Terminology
Prior to the spread of electronic mail services, the word email, here derived from the French word émail, primarily referred to vitreous enamel or sometimes ceramic glaze. A rare term, it was mainly used by art historians and medievalists.
Historically, the term electronic mail was applied to any electronic document transmission. For example, several writers in the early 1970s used the term to refer to fax document transmission. As a result, it is difficult to find the first use of the term with the specific meaning it has today.
The term electronic mail has been in use with its current meaning since at least 1975, and variations of the shorter E-mail have been in use since at least 1979:
email is now the common form, and recommended by style guides. It is the form required by IETF Requests for Comments (RFC) and working groups. This spelling also appears in most dictionaries.
e-mail is the form favored in edited published American English and British English writing as reflected in the Corpus of Contemporary American English data, but is falling out of favor in some style guides.
EMail is a traditional form used in RFCs for the "Author's Address" and is required "for historical reasons".
E-mail is sometimes used, capitalizing the initial E as in similar abbreviations like E-piano, E-guitar, A-bomb, and H-bomb.
In the original protocol, RFC 524, none of these forms was used. The service is simply referred to as mail, and a single piece of electronic mail is called a message.
An Internet email consists of an envelope and content; the content consists of a header and a body.
Origin
Computer-based mail and messaging became possible with the advent of time-sharing computers in the early 1960s, and informal methods of using shared files to pass messages were soon expanded into the first mail systems. Most developers of early mainframes and minicomputers developed similar, but generally incompatible, mail applications. Over time, a complex web of gateways and routing systems linked many of them. Many US universities were part of the ARPANET (created in the late 1960s), which aimed at software portability between its systems. In 1971 the first ARPANET network email was sent, introducing the now-familiar address syntax with the '@' symbol designating the user's system address. The Simple Mail Transfer Protocol (SMTP) protocol was introduced in 1981.
For a time in the late 1980s and early 1990s, it seemed likely that either a proprietary commercial system or the X.400 email system, part of the Government Open Systems Interconnection Profile (GOSIP), would predominate. However, once the final restrictions on carrying commercial traffic over the Internet ended in 1995, a combination of factors made the current Internet suite of SMTP, POP3 and IMAP email protocols the standard.
Operation
The following is a typical sequence of events that takes place when sender Alice transmits a message using a mail user agent (MUA) addressed to the email address of the recipient.
The MUA formats the message in email format and uses the submission protocol, a profile of the Simple Mail Transfer Protocol (SMTP), to send the message content to the local mail submission agent (MSA), in this case smtp.a.org.
The MSA determines the destination address provided in the SMTP protocol (not from the message header), in this case bob@b.org, which is a fully qualified domain address (FQDA). The part before the @ sign is the local part of the address, often the username of the recipient, and the part after the @ sign is a domain name. The MSA resolves a domain name to determine the fully qualified domain name of the mail server in the Domain Name System (DNS).
The DNS server for the domain b.org (ns.b.org) responds with any MX records listing the mail exchange servers for that domain, in this case mx.b.org, a message transfer agent (MTA) server run by the recipient's ISP.
smtp.a.org sends the message to mx.b.org using SMTP. This server may need to forward the message to other MTAs before the message reaches the final message delivery agent (MDA).
The MDA delivers it to the mailbox of user bob.
Bob's MUA picks up the message using either the Post Office Protocol (POP3) or the Internet Message Access Protocol (IMAP).
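A minimal sketch of the first four steps above, using Python's standard smtplib and email modules plus the third-party dnspython package for the MX lookup (hostnames, addresses and credentials are placeholders based on the example; real submission normally goes through the sender's own MSA with TLS and authentication):

```python
import smtplib
from email.message import EmailMessage

import dns.resolver  # third-party package "dnspython", assumed installed

# Step 1: the MUA builds the message for submission to the MSA smtp.a.org.
msg = EmailMessage()
msg["From"] = "alice@a.org"          # placeholder sender address
msg["To"] = "bob@b.org"
msg["Subject"] = "Hello"
msg.set_content("Hi Bob!")

# Steps 2-3: look up the MX records for the recipient domain b.org.
answers = dns.resolver.resolve("b.org", "MX")
best = min(answers, key=lambda r: r.preference)
print("mail exchanger:", str(best.exchange).rstrip("."))   # e.g. mx.b.org

# Step 4 (from the MSA's side, simplified): relay the message over SMTP.
with smtplib.SMTP("smtp.a.org", 587) as server:     # placeholder MSA
    server.starttls()
    server.login("alice", "app-password")           # placeholder credentials
    server.send_message(msg)
```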
In addition to this example, alternatives and complications exist in the email system:
Alice or Bob may use a client connected to a corporate email system, such as IBM Lotus Notes or Microsoft Exchange. These systems often have their own internal email format and their clients typically communicate with the email server using a vendor-specific, proprietary protocol. The server sends or receives email via the Internet through the product's Internet mail gateway which also does any necessary reformatting. If Alice and Bob work for the same company, the entire transaction may happen completely within a single corporate email system.
Alice may not have an MUA on her computer but instead may connect to a webmail service.
Alice's computer may run its own MTA, so avoiding the transfer at step 1.
Bob may pick up his email in many ways, for example logging into mx.b.org and reading it directly, or by using a webmail service.
Domains usually have several mail exchange servers so that they can continue to accept mail even if the primary is not available.
Many MTAs used to accept messages for any recipient on the Internet and do their best to deliver them. Such MTAs are called open mail relays. This was very important in the early days of the Internet when network connections were unreliable. However, this mechanism proved to be exploitable by originators of unsolicited bulk email and as a consequence open mail relays have become rare, and many MTAs do not accept messages from open mail relays.
Message format
The basic Internet message format used for email is defined by RFC 5322, with encoding of non-ASCII data and multimedia content attachments defined in RFC 2045 through RFC 2049, collectively called Multipurpose Internet Mail Extensions or MIME. The extensions in International email apply only to email. RFC 5322 replaced the earlier RFC 2822 in 2008, and RFC 2822 in 2001 had in turn replaced RFC 822, which had been the standard for Internet email for decades. Published in 1982, RFC 822 was based on the earlier RFC 733 for the ARPANET.
Internet email messages consist of two sections, 'header' and 'body'. These are known as 'content'. The header is structured into fields such as From, To, CC, Subject, Date, and other information about the email. In the process of transporting email messages between systems, SMTP communicates delivery parameters and information using message header fields. The body contains the message, as unstructured text, sometimes containing a signature block at the end. The header is separated from the body by a blank line.
Message header
RFC 5322 specifies the syntax of the email header. Each email message has a header (the "header section" of the message, according to the specification), comprising a number of fields ("header fields"). Each field has a name ("field name" or "header field name"), followed by the separator character ":", and a value ("field body" or "header field body").
Each field name begins in the first character of a new line in the header section, and begins with a non-whitespace printable character. It ends with the separator character ":". The separator is followed by the field value (the "field body"). The value can continue onto subsequent lines if those lines have space or tab as their first character. Field names and, without SMTPUTF8, field bodies are restricted to 7-bit ASCII characters. Some non-ASCII values may be represented using MIME encoded words.
Header fields
Email header fields can be multi-line, with each line recommended to be no more than 78 characters, although the limit is 998 characters. Header fields defined by RFC 5322 contain only US-ASCII characters; for encoding characters in other sets, a syntax specified in RFC 2047 may be used. The IETF EAI working group has defined some standards-track extensions, replacing previous experimental extensions, so that UTF-8 encoded Unicode characters may be used within the header. In particular, this allows email addresses to use non-ASCII characters. Such addresses are supported by Google and Microsoft products, and promoted by some government agencies.
The message header must include at least the following fields:
From: The email address, and, optionally, the name of the author(s). In many email clients this is changeable through account settings.
Date: The local time and date the message was written. Like the From: field, many email clients fill this in automatically before sending. The recipient's client may display the time in the format and time zone local to them.
RFC 3864 describes registration procedures for message header fields at the IANA; it provides for permanent and provisional field names, including also fields defined for MIME, netnews, and HTTP, and referencing relevant RFCs. Common header fields for email include:
To: The email address(es), and optionally name(s) of the message's recipient(s). Indicates primary recipients (multiple allowed), for secondary recipients see Cc: and Bcc: below.
Subject: A brief summary of the topic of the message. Certain abbreviations are commonly used in the subject, including "RE:" and "FW:".
Cc: Carbon copy; Many email clients mark email in one's inbox differently depending on whether they are in the To: or Cc: list.
Bcc: Blind carbon copy; addresses are usually only specified during SMTP delivery, and not usually listed in the message header.
Content-Type: Information about how the message is to be displayed, usually a MIME type.
Precedence: commonly with values "bulk", "junk", or "list"; used to indicate that automated "vacation" or "out of office" responses should not be returned for this mail, e.g. to prevent vacation notices from being sent to all other subscribers of a mailing list. Sendmail uses this field to affect prioritization of queued email, with "Precedence: special-delivery" messages delivered sooner. With modern high-bandwidth networks, delivery priority is less of an issue than it once was. Microsoft Exchange respects a fine-grained automatic response suppression mechanism, the X-Auto-Response-Suppress field.
Message-ID: An automatically generated field used to prevent multiple deliveries and for reference in In-Reply-To: (see below).
In-Reply-To: Message-ID of the message this is a reply to. Used to link related messages together. This field only applies to reply messages.
References: Message-ID of the message this is a reply to, and the message-id of the message the previous reply was a reply to, etc.
Reply-To: Address that should be used to reply to the message.
Sender: Address of the sender acting on behalf of the author listed in the From: field (secretary, list manager, etc.).
Archived-At: A direct link to the archived form of an individual email message.
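A short sketch of how these fields look when a message is built with Python's standard email package (addresses and subject are placeholders; Received: and Return-Path: are added later by mail servers, not by the author):

```python
from email.message import EmailMessage
from email.utils import formatdate, make_msgid

msg = EmailMessage()
# Required fields
msg["From"] = "Alice Example <alice@example.org>"   # placeholder author
msg["Date"] = formatdate(localtime=True)            # RFC 5322 date format
# Common optional fields
msg["To"] = "Bob Example <bob@example.com>"         # placeholder recipient
msg["Cc"] = "carol@example.net"
msg["Subject"] = "RE: meeting notes"
msg["Message-ID"] = make_msgid()                    # usually set by the MUA or MSA
msg.set_content("Plain-text body goes here.")

print(msg)   # prints the header section, a blank line, then the body
```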
The To: field may be unrelated to the addresses to which the message is delivered. The delivery list is supplied separately to the transport protocol, SMTP, which may be extracted from the header content. The "To:" field is similar to the addressing at the top of a conventional letter delivered according to the address on the outer envelope. In the same way, the "From:" field may not be the sender. Some mail servers apply email authentication systems to messages relayed. Data pertaining to the server's activity is also part of the header, as defined below.
SMTP defines the trace information of a message saved in the header using the following two fields:
Received: after an SMTP server accepts a message, it inserts this trace record at the top of the header (last to first).
Return-Path: after the delivery SMTP server makes the final delivery of a message, it inserts this field at the top of the header.
Other fields added on top of the header by the receiving server may be called trace fields.
Authentication-Results: after a server verifies authentication, it can save the results in this field for consumption by downstream agents.
Received-SPF: stores results of SPF checks in more detail than Authentication-Results.
DKIM-Signature: carries the DomainKeys Identified Mail (DKIM) signature, which lets the receiving server verify that the message was not changed after it was signed.
Auto-Submitted: is used to mark automatically generated messages.
VBR-Info: claims VBR whitelisting
Message body
Content encoding
Internet email was designed for 7-bit ASCII. Most email software is 8-bit clean, but must assume it will communicate with 7-bit servers and mail readers. The MIME standard introduced character set specifiers and two content transfer encodings to enable transmission of non-ASCII data: quoted-printable for mostly 7-bit content with a few characters outside that range, and base64 for arbitrary binary data. The 8BITMIME and BINARY extensions were introduced to allow transmission of mail without the need for these encodings, but many mail transport agents may not support them. In some countries, e-mail software violates the standard by sending raw non-ASCII text, and several encoding schemes co-exist; as a result, by default, a message in a non-Latin-alphabet language appears in unreadable form (the only exception is coincidence, if the sender and receiver use the same encoding scheme). Therefore, for international character sets, Unicode is growing in popularity.
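A small sketch of the two MIME content transfer encodings using Python's standard quopri and base64 modules (the sample text is arbitrary):

```python
import base64
import quopri

text = "Grüße aus Köln".encode("utf-8")   # a short string with non-ASCII bytes

# Quoted-printable: stays mostly readable; only non-ASCII bytes are escaped.
print(quopri.encodestring(text).decode("ascii"))
# e.g. Gr=C3=BC=C3=9Fe aus K=C3=B6ln

# Base64: compact for arbitrary binary data, but not human-readable.
print(base64.b64encode(text).decode("ascii"))
```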
Plain text and HTML
Most modern graphic email clients allow the use of either plain text or HTML for the message body at the option of the user. HTML email messages often include an automatic-generated plain text copy for compatibility. Advantages of HTML include the ability to include in-line links and images, set apart previous messages in block quotes, wrap naturally on any display, use emphasis such as underlines and italics, and change font styles. Disadvantages include the increased size of the email, privacy concerns about web bugs, abuse of HTML email as a vector for phishing attacks and the spread of malicious software.
Some e-mail clients interpret the body as HTML even in the absence of a Content-Type: html header field; this may cause various problems.
Some web-based mailing lists recommend all posts be made in plain-text, with 72 or 80 characters per line for all the above reasons, and because they have a significant number of readers using text-based email clients such as Mutt. Some Microsoft email clients may allow rich formatting using their proprietary Rich Text Format (RTF), but this should be avoided unless the recipient is guaranteed to have a compatible email client.
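A brief sketch of a message carrying both representations with Python's standard email package (the content is a placeholder); clients that cannot render HTML fall back to the plain-text part:

```python
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "alice@example.org"      # placeholder addresses
msg["To"] = "bob@example.com"
msg["Subject"] = "Multipart example"

# Plain-text part first, then an HTML alternative of the same content.
msg.set_content("Hello Bob,\nplease see the report at https://example.org/report\n")
msg.add_alternative(
    "<html><body><p>Hello Bob,</p>"
    "<p>please see the <a href='https://example.org/report'>report</a>.</p>"
    "</body></html>",
    subtype="html",
)

print(msg["Content-Type"])   # multipart/alternative; boundary="..."
```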
Servers and client applications
Messages are exchanged between hosts using the Simple Mail Transfer Protocol with software programs called mail transfer agents (MTAs); and delivered to a mail store by programs called mail delivery agents (MDAs, also sometimes called local delivery agents, LDAs). Accepting a message obliges an MTA to deliver it, and when a message cannot be delivered, that MTA must send a bounce message back to the sender, indicating the problem.
Users can retrieve their messages from servers using standard protocols such as POP or IMAP, or, as is more likely in a large corporate environment, with a proprietary protocol specific to Novell Groupwise, Lotus Notes or Microsoft Exchange Servers. Programs used by users for retrieving, reading, and managing email are called mail user agents (MUAs).
When opening an email, it is marked as "read", which typically visibly distinguishes it from "unread" messages on clients' user interfaces. Email clients may allow hiding read emails from the inbox so the user can focus on the unread.
Mail can be stored on the client, on the server side, or in both places. Standard formats for mailboxes include Maildir and mbox. Several prominent email clients use their own proprietary format and require conversion software to transfer email between them. Server-side storage is often in a proprietary format but since access is through a standard protocol such as IMAP, moving email from one server to another can be done with any MUA supporting the protocol.
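A minimal sketch of reading both standard mailbox formats with Python's standard mailbox module (the file and directory paths are placeholders):

```python
import mailbox
import os

# mbox: all messages concatenated in a single plain-text file.
archive = mailbox.mbox("archive.mbox")                  # placeholder path
for message in archive:
    print(message["From"], "-", message["Subject"])

# Maildir: one file per message inside cur/, new/ and tmp/ subdirectories.
inbox = mailbox.Maildir(os.path.expanduser("~/Maildir"), create=False)  # placeholder
print(len(inbox), "messages in the Maildir")
```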
Many current email users do not run MTA, MDA or MUA programs themselves, but use a web-based email platform, such as Gmail or Yahoo! Mail, that performs the same tasks. Such webmail interfaces allow users to access their mail with any standard web browser, from any computer, rather than relying on a local email client.
Filename extensions
Upon reception of email messages, email client applications save messages in operating system files in the file system. Some clients save individual messages as separate files, while others use various database formats, often proprietary, for collective storage. A historical standard of storage is the mbox format. The specific format used is often indicated by special filename extensions:
eml
Used by many email clients including Novell GroupWise, Microsoft Outlook Express, Lotus Notes, Windows Mail, Mozilla Thunderbird, and Postbox. The files contain the email contents as plain text in MIME format, containing the email header and body, including attachments in one or more of several formats.
emlx
Used by Apple Mail.
msg
Used by Microsoft Office Outlook and OfficeLogic Groupware.
mbx
Used by Opera Mail, KMail, and Apple Mail based on the mbox format.
Some applications (like Apple Mail) leave attachments encoded in messages for searching while also saving separate copies of the attachments. Others separate attachments from messages and save them in a specific directory.
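Because .eml files are plain RFC 5322/MIME text, they can be read with Python's standard email package; a small sketch (the filename is a placeholder):

```python
from email import policy
from email.parser import BytesParser

# Parse a single message stored as an .eml file.
with open("message.eml", "rb") as f:                    # placeholder filename
    msg = BytesParser(policy=policy.default).parse(f)

print(msg["Subject"])
for part in msg.iter_attachments():
    print("attachment:", part.get_filename(), part.get_content_type())
```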
URI scheme mailto
The URI scheme, as registered with the IANA, defines the mailto: scheme for SMTP email addresses. Though its use is not strictly defined, URLs of this form are intended to be used to open the new message window of the user's mail client when the URL is activated, with the address as defined by the URL in the To: field. Many clients also support query string parameters for the other email fields, such as its subject line or carbon copy recipients.
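A short sketch of building such a URL with Python's standard urllib (the address and fields are placeholders):

```python
from urllib.parse import quote, urlencode

address = "bob@example.com"                             # placeholder recipient
params = {"subject": "Meeting notes", "cc": "carol@example.net"}

# Query-string parameters map onto the corresponding message header fields.
mailto_url = "mailto:" + address + "?" + urlencode(params, quote_via=quote)
print(mailto_url)
# mailto:bob@example.com?subject=Meeting%20notes&cc=carol%40example.net
```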
Types
Web-based email
Many email providers have a web-based email client (e.g. AOL Mail, Gmail, Outlook.com and Yahoo! Mail). This allows users to log into the email account by using any compatible web browser to send and receive their email. Mail is typically not downloaded to the web client, so can't be read without a current Internet connection.
POP3 email servers
The Post Office Protocol 3 (POP3) is a mail access protocol used by a client application to read messages from the mail server. Received messages are often deleted from the server. POP supports simple download-and-delete requirements for access to remote mailboxes (termed maildrop in the POP RFCs). POP3 allows users to download email messages to a local computer and read them even when they are offline.
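A minimal sketch of the download-and-read pattern with Python's standard poplib (host and credentials are placeholders):

```python
import poplib
from email import policy
from email.parser import BytesParser

# Connect to the POP3 server over TLS (placeholder host and credentials).
server = poplib.POP3_SSL("pop.example.com")
server.user("bob")
server.pass_("app-password")

count, _ = server.stat()                  # number of messages and mailbox size
for i in range(1, count + 1):
    _, lines, _ = server.retr(i)          # download message number i
    msg = BytesParser(policy=policy.default).parsebytes(b"\r\n".join(lines))
    print(msg["From"], "-", msg["Subject"])
    # server.dele(i)                      # uncomment for download-and-delete

server.quit()
```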
IMAP email servers
The Internet Message Access Protocol (IMAP) provides features to manage a mailbox from multiple devices. Small portable devices like smartphones are increasingly used to check email while traveling and to make brief replies, with larger devices with better keyboard access being used to reply at greater length. IMAP shows the headers of messages, the sender and the subject, and the device needs to request to download specific messages. Usually, the mail is left in folders on the mail server.
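A small sketch of this headers-first pattern with Python's standard imaplib (host, credentials and mailbox name are placeholders):

```python
import imaplib
from email import policy
from email.parser import BytesParser

# Connect over TLS and leave the mail on the server (placeholder credentials).
with imaplib.IMAP4_SSL("imap.example.com") as imap:
    imap.login("bob", "app-password")
    imap.select("INBOX", readonly=True)

    # Fetch only selected header fields, not the full message bodies.
    _, data = imap.search(None, "ALL")
    for num in data[0].split():
        _, fetched = imap.fetch(num, "(BODY.PEEK[HEADER.FIELDS (FROM SUBJECT DATE)])")
        headers = BytesParser(policy=policy.default).parsebytes(fetched[0][1])
        print(headers["From"], "-", headers["Subject"])
```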
MAPI email servers
Messaging Application Programming Interface (MAPI) is used by Microsoft Outlook to communicate to Microsoft Exchange Server - and to a range of other email server products such as Axigen Mail Server, Kerio Connect, Scalix, Zimbra, HP OpenMail, IBM Lotus Notes, Zarafa, and Bynari where vendors have added MAPI support to allow their products to be accessed directly via Outlook.
Uses
Business and organizational use
Email has been widely accepted by businesses, governments and non-governmental organizations in the developed world, and it is one of the key parts of an 'e-revolution' in workplace communication (with the other key plank being widespread adoption of high-speed Internet). A sponsored 2010 study on workplace communication found 83% of U.S. knowledge workers felt email was critical to their success and productivity at work.
It has some key benefits to business and other organizations, including:
Facilitating logistics
Much of the business world relies on communications between people who are not physically in the same building, area, or even country; setting up and attending an in-person meeting, telephone call, or conference call can be inconvenient, time-consuming, and costly. Email provides a method of exchanging information between two or more people with no set-up costs and that is generally far less expensive than a physical meeting or phone call.
Helping with synchronization
With real time communication by meetings or phone calls, participants must work on the same schedule, and each participant must spend the same amount of time in the meeting or call. Email allows asynchrony: each participant may control their schedule independently. Batch processing of incoming emails can improve workflow compared to interrupting calls.
Reducing cost
Sending an email is much less expensive than sending postal mail, or long distance telephone calls, telex or telegrams.
Increasing speed
Much faster than most of the alternatives.
Creating a "written" record
Unlike a telephone or in-person conversation, email by its nature creates a detailed written record of the communication, the identity of the sender(s) and recipient(s) and the date and time the message was sent. In the event of a contract or legal dispute, saved emails can be used to prove that an individual was advised of certain issues, as each email has the date and time recorded on it.
Possibility of auto-processing and improved distribution
For example, pre-processing of customers' orders or routing a message to the person in charge can be handled by automated procedures.
Email marketing
Email marketing via "opt-in" is often successfully used to send special sales offerings and new product information. Depending on the recipient's culture, email sent without permission, such as marketing mail without an "opt-in", is likely to be viewed as unwelcome "email spam".
Personal use
Personal computer
Many users access their personal emails from friends and family members using a personal computer in their house or apartment.
Mobile
Email is now used on smartphones and on all types of computers. Mobile "apps" for email increase accessibility to the medium for users who are out of their homes. While in the earliest years of email, users could only access email on desktop computers, in the 2010s, it became possible for users to check their email when they are away from home, whether across town or across the world. Alerts can also be sent to the smartphone or other devices to notify users immediately of new messages. This has given email the ability to be used for more frequent communication between users and allowed them to check their email and write messages throughout the day. By one estimate, there were approximately 1.4 billion email users worldwide and 50 billion non-spam emails sent daily.
Individuals often check emails on smartphones for both personal and work-related messages. It was found that US adults check their email more than they browse the web or check their Facebook accounts, making email the most popular activity for users to do on their smartphones. 78% of the respondents in the study revealed that they check their email on their phone. It was also found that 30% of consumers use only their smartphone to check their email, and 91% were likely to check their email at least once per day on their smartphone. However, the percentage of consumers using email on a smartphone ranges and differs dramatically across different countries. For example, in comparison to 75% of those consumers in the US who used it, only 17% in India did.
Declining use among young people
The number of Americans visiting email web sites fell 6 percent after peaking in November 2009. For persons 12 to 17, the number was down 18 percent. Young people preferred instant messaging, texting and social media. Technology writer Matt Richtel said in The New York Times that email was like the VCR, vinyl records and film cameras—no longer cool and something older people do.
A 2015 survey of Android users showed that persons 13 to 24 used messaging apps 3.5 times as much as those over 45, and were far less likely to use email.
Issues
Attachment size limitation
Email messages may have one or more attachments, which are additional files that are appended to the email. Typical attachments include Microsoft Word documents, PDF documents, and scanned images of paper documents. In principle, there is no technical restriction on the size or number of attachments. In practice, however, email clients, servers, and Internet service providers implement various limitations on the size of files or of the complete email, typically to 25 MB or less. Furthermore, due to technical reasons, attachment sizes as seen by these transport systems can differ from what the user sees, which can be confusing to senders when trying to assess whether they can safely send a file by email. Where larger files need to be shared, various file hosting services are available and commonly used.
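A brief sketch of attaching a file and checking the size of the serialized message before sending (the file path and the 25 MB figure are illustrative; actual limits vary by provider):

```python
from email.message import EmailMessage
from pathlib import Path

msg = EmailMessage()
msg["From"] = "alice@example.org"        # placeholder addresses
msg["To"] = "bob@example.com"
msg["Subject"] = "Report attached"
msg.set_content("The scanned report is attached.")

data = Path("report.pdf").read_bytes()   # placeholder file
msg.add_attachment(data, maintype="application", subtype="pdf",
                   filename="report.pdf")

# Base64 encoding inflates attachments by roughly a third, so check the size
# of the whole serialized message, not just the file on disk.
size_mb = len(msg.as_bytes()) / (1024 * 1024)
if size_mb > 25:                         # illustrative provider limit
    print(f"{size_mb:.1f} MB is too large; consider a file hosting service")
```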
Information overload
The ubiquity of email for knowledge workers and "white collar" employees has led to concerns that recipients face an "information overload" in dealing with increasing volumes of email. With the growth in mobile devices, by default employees may also receive work-related emails outside of their working day. This can lead to increased stress and decreased satisfaction with work. Some observers even argue it could have a significant negative economic effect, as efforts to read the many emails could reduce productivity.
Spam
Email "spam" is unsolicited bulk email. The low cost of sending such email meant that, by 2003, up to 30% of total email traffic was spam, and was threatening the usefulness of email as a practical tool. The US CAN-SPAM Act of 2003 and similar laws elsewhere had some impact, and a number of effective anti-spam techniques now largely mitigate the impact of spam by filtering or rejecting it for most users, but the volume sent is still very high—and increasingly consists not of advertisements for products, but malicious content or links. In September 2017, for example, the proportion of spam to legitimate email rose to 59.56%. The percentage of spam email in 2021 is estimated to be 85%.
Malware
A range of malicious email types exist. These range from various types of email scams, including "social engineering" scams such as advance-fee scam "Nigerian letters", to phishing, email bombardment and email worms.
Email spoofing
Email spoofing occurs when the email message header is designed to make the message appear to come from a known or trusted source. Email spam and phishing methods typically use spoofing to mislead the recipient about the true message origin. Email spoofing may be done as a prank, or as part of a criminal effort to defraud an individual or organization. An example of a potentially fraudulent email spoofing is if an individual creates an email that appears to be an invoice from a major company, and then sends it to one or more recipients. In some cases, these fraudulent emails incorporate the logo of the purported organization and even the email address may appear legitimate.
Email bombing
Email bombing is the intentional sending of large volumes of messages to a target address. The overloading of the target email address can render it unusable and can even cause the mail server to crash.
Privacy concerns
Today it can be important to distinguish between the Internet and internal email systems. Internet email may travel and be stored on networks and computers without the sender's or the recipient's control. During the transit time it is possible that third parties read or even modify the content. Internal mail systems, in which the information never leaves the organizational network, may be more secure, although information technology personnel and others whose function may involve monitoring or managing may be accessing the email of other employees.
Email privacy, without some security precautions, can be compromised because:
email messages are generally not encrypted.
email messages have to go through intermediate computers before reaching their destination, meaning it is relatively easy for others to intercept and read messages.
many Internet Service Providers (ISP) store copies of email messages on their mail servers before they are delivered. The backups of these can remain for up to several months on their server, despite deletion from the mailbox.
the "Received:"-fields and other information in the email can often identify the sender, preventing anonymous communication.
web bugs invisibly embedded in HTML content can alert the sender of any email whenever an email is rendered as HTML (some e-mail clients do this when the user reads, or re-reads the e-mail) and from which IP address. It can also reveal whether an email was read on a smartphone or a PC, or Apple Mac device via the user agent string.
There are cryptography applications that can serve as a remedy to one or more of the above. For example, Virtual Private Networks or the Tor network can be used to encrypt traffic from the user machine to a safer network while GPG, PGP, SMEmail, or S/MIME can be used for end-to-end message encryption, and SMTP STARTTLS or SMTP over Transport Layer Security/Secure Sockets Layer can be used to encrypt communications for a single mail hop between the SMTP client and the SMTP server.
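A minimal sketch of the last of these, hop-by-hop transport encryption with SMTP STARTTLS, using Python's standard smtplib and ssl modules (host and credentials are placeholders; this protects only the hop to the submission server, not the whole path to the recipient):

```python
import smtplib
import ssl
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "alice@example.org"        # placeholder addresses
msg["To"] = "bob@example.com"
msg["Subject"] = "Encrypted in transit"
msg.set_content("This hop is protected by TLS, not end-to-end encrypted.")

context = ssl.create_default_context()   # verifies the server certificate
with smtplib.SMTP("smtp.example.org", 587) as server:   # placeholder host
    server.starttls(context=context)     # upgrade the connection to TLS
    server.login("alice", "app-password")                # placeholder credentials
    server.send_message(msg)
```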
Additionally, many mail user agents do not protect logins and passwords, making them easy to intercept by an attacker. Encrypted authentication schemes such as SASL prevent this. Finally, the attached files share many of the same hazards as those found in peer-to-peer filesharing. Attached files may contain trojans or viruses.
Legal contracts
It is possible for an exchange of emails to form a binding contract, so users must be careful about what they send through email correspondence. A signature block on an email may be interpreted as satisfying a signature requirement for a contract.
Flaming
Flaming occurs when a person sends a message (or many messages) with angry or antagonistic content. The term is derived from the use of the word incendiary to describe particularly heated email discussions. The ease and impersonality of email communications mean that the social norms that encourage civility in person or via telephone do not exist and civility may be forgotten.
Email bankruptcy
Also known as "email fatigue", email bankruptcy is when a user ignores a large number of email messages after falling behind in reading and answering them. The reason for falling behind is often due to information overload and a general sense there is so much information that it is not possible to read it all. As a solution, people occasionally send a "boilerplate" message explaining that their email inbox is full, and that they are in the process of clearing out all the messages. Harvard University law professor Lawrence Lessig is credited with coining this term, but he may only have popularized it.
Internationalization
Originally Internet email was completely ASCII text-based. MIME now allows body content text and some header content text in international character sets, but other headers and email addresses using UTF-8, while standardized, have yet to be widely adopted.
Tracking of sent mail
The original SMTP mail service provides limited mechanisms for tracking a transmitted message, and none for verifying that it has been delivered or read. It requires that each mail server must either deliver it onward or return a failure notice (bounce message), but both software bugs and system failures can cause messages to be lost. To remedy this, the IETF introduced Delivery Status Notifications (delivery receipts) and Message Disposition Notifications (return receipts); however, these are not universally deployed in production.
Many ISPs now deliberately disable non-delivery reports (NDRs) and delivery receipts due to the activities of spammers:
Delivery Reports can be used to verify whether an address exists and if so, this indicates to a spammer that it is available to be spammed.
If the spammer uses a forged sender email address (email spoofing), then the innocent email address that was used can be flooded with NDRs from the many invalid email addresses the spammer may have attempted to mail. These NDRs then constitute spam from the ISP to the innocent user.
In the absence of standard methods, a range of systems based on the use of web bugs has been developed. However, these are often seen as underhanded or as raising privacy concerns, and they only work with email clients that support rendering of HTML. Many mail clients now default to not showing "web content". Webmail providers can also disrupt web bugs by pre-caching images.
See also
Anonymous remailer
Anti-spam techniques
biff
Bounce message
Comparison of email clients
Dark Mail Alliance
Disposable email address
E-card
Electronic mailing list
Email art
Email authentication
Email digest
Email encryption
Email hosting service
Email storm
Email tracking
HTML email
Information overload
Internet fax
Internet mail standards
List of email subject abbreviations
MCI Mail
Netiquette
Posting style
Privacy-enhanced Electronic Mail
Push email
RSS
Telegraphy
Unicode and email
Usenet quoting
Webmail, Comparison of webmail providers
X-Originating-IP
X.400
Yerkish
Notes
References
Further reading
Cemil Betanov, Introduction to X.400, Artech House.
Marsha Egan, "Inbox Detox and The Habit of Email Excellence", Acanthus Publishing.
Lawrence Hughes, Internet e-mail Protocols, Standards and Implementation, Artech House Publishers.
Kevin Johnson, Internet Email Protocols: A Developer's Guide, Addison-Wesley Professional.
Pete Loshin, Essential Email Standards: RFCs and Protocols Made Practical, John Wiley & Sons.
Sara Radicati, Electronic Mail: An Introduction to the X.400 Message Handling Standards, McGraw-Hill.
John Rhoton, Programmer's Guide to Internet Mail: SMTP, POP, IMAP, and LDAP, Elsevier.
John Rhoton, X.400 and SMTP: Battle of the E-mail Protocols, Elsevier.
David Wood, Programming Internet Mail, O'Reilly.
External links
IANA's list of standard header fields
The History of Email is Dave Crocker's attempt at capturing the sequence of 'significant' occurrences in the evolution of email; a collaborative effort that also cites this page.
The History of Electronic Mail is a personal memoir by the implementer of an early email system
A Look at the Origins of Network Email is a short, yet vivid recap of the key historical facts
Business E-Mail Compromise - An Emerging Global Threat, FBI
Explained from first principles, a 2021 article attempting to summarize more than 100 RFCs
Internet terminology
Mail
History of the Internet
Computer-related introductions in 1971 |
9966 | https://en.wikipedia.org/wiki/Elliptic-curve%20cryptography | Elliptic-curve cryptography | Elliptic-curve cryptography (ECC) is an approach to public-key cryptography based on the algebraic structure of elliptic curves over finite fields. ECC allows smaller keys compared to non-EC cryptography (based on plain Galois fields) to provide equivalent security.
Elliptic curves are applicable for key agreement, digital signatures, pseudo-random generators and other tasks. Indirectly, they can be used for encryption by combining the key agreement with a symmetric encryption scheme. Elliptic curves are also used in several integer factorization algorithms based on elliptic curves that have applications in cryptography, such as Lenstra elliptic-curve factorization.
Rationale
Public-key cryptography is based on the intractability of certain mathematical problems. Early public-key systems based their security on the assumption that it is difficult to factor a large integer composed of two or more large prime factors. For later elliptic-curve-based protocols, the base assumption is that finding the discrete logarithm of a random elliptic curve element with respect to a publicly known base point is infeasible: this is the "elliptic curve discrete logarithm problem" (ECDLP). The security of elliptic curve cryptography depends on the ability to compute a point multiplication and the inability to compute the multiplicand given the original and product points. The size of the elliptic curve, measured by the total number of discrete integer pairs satisfying the curve equation, determines the difficulty of the problem.
The U.S. National Institute of Standards and Technology (NIST) has endorsed elliptic curve cryptography in its Suite B set of recommended algorithms, specifically elliptic-curve Diffie–Hellman (ECDH) for key exchange and Elliptic Curve Digital Signature Algorithm (ECDSA) for digital signature. The U.S. National Security Agency (NSA) allows their use for protecting information classified up to top secret with 384-bit keys. However, in August 2015, the NSA announced that it plans to replace Suite B with a new cipher suite due to concerns about quantum computing attacks on ECC.
While the RSA patent expired in 2000, there may be patents in force covering certain aspects of ECC technology. However, some, including RSA Laboratories and Daniel J. Bernstein, argue that the US government elliptic curve digital signature standard (ECDSA; NIST FIPS 186-3) and certain practical ECC-based key exchange schemes (including ECDH) can be implemented without infringing them.
The primary benefit promised by elliptic curve cryptography is a smaller key size, reducing storage and transmission requirements, i.e. that an elliptic curve group could provide the same level of security afforded by an RSA-based system with a large modulus and correspondingly larger key: for example, a 256-bit elliptic curve public key should provide comparable security to a 3072-bit RSA public key.
History
The use of elliptic curves in cryptography was suggested independently by Neal Koblitz and Victor S. Miller in 1985. Elliptic curve cryptography algorithms entered wide use in 2004 to 2005.
Theory
For current cryptographic purposes, an elliptic curve is a plane curve over a finite field (rather than the real numbers) which consists of the points satisfying the equation y² = x³ + ax + b,
along with a distinguished point at infinity, denoted ∞. The coordinates here are to be chosen from a fixed finite field of characteristic not equal to 2 or 3, or the curve equation will be somewhat more complicated.
This set together with the group operation of elliptic curves is an abelian group, with the point at infinity as the identity element. The structure of the group is inherited from the divisor group of the underlying algebraic variety.
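A toy sketch of this group law over a very small prime field, written in Python (the field size, coefficients and base point are illustrative teaching values, far too small for real security; production systems use standardized curves such as P-256):

```python
# Curve y^2 = x^3 + a*x + b over F_p, with the usual chord-and-tangent rule.
p = 17                     # toy field characteristic
a, b = 2, 2                # toy curve coefficients
O = None                   # the point at infinity (group identity)

def is_on_curve(P):
    if P is O:
        return True
    x, y = P
    return (y * y - (x * x * x + a * x + b)) % p == 0

def point_add(P, Q):
    """Add two points of the curve group."""
    if P is O:
        return Q
    if Q is O:
        return P
    x1, y1 = P
    x2, y2 = Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return O                                        # P + (-P) = O
    if P == Q:
        s = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p  # tangent slope
    else:
        s = (y2 - y1) * pow(x2 - x1, -1, p) % p         # chord slope
    x3 = (s * s - x1 - x2) % p
    y3 = (s * (x1 - x3) - y1) % p
    return (x3, y3)

def scalar_mul(k, P):
    """Compute k*P by double-and-add."""
    R = O
    while k:
        if k & 1:
            R = point_add(R, P)
        P = point_add(P, P)
        k >>= 1
    return R

G = (5, 1)                 # a point on this toy curve
assert is_on_curve(G)

# Order of G: the smallest n > 0 with n*G = O (brute force only works
# because the field is tiny).
n, Q = 1, G
while Q is not O:
    Q = point_add(Q, G)
    n += 1
print("order of G:", n)
print("7*G =", scalar_mul(7, G))
```

Here pow(x, -1, p) computes a modular inverse and requires Python 3.8 or later.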
Cryptographic schemes
Several discrete logarithm-based protocols have been adapted to elliptic curves, replacing the group with an elliptic curve:
The Elliptic Curve Diffie–Hellman (ECDH) key agreement scheme is based on the Diffie–Hellman scheme,
The Elliptic Curve Integrated Encryption Scheme (ECIES), also known as Elliptic Curve Augmented Encryption Scheme or simply the Elliptic Curve Encryption Scheme,
The Elliptic Curve Digital Signature Algorithm (ECDSA) is based on the Digital Signature Algorithm,
The deformation scheme using Harrison's p-adic Manhattan metric,
The Edwards-curve Digital Signature Algorithm (EdDSA) is based on Schnorr signature and uses twisted Edwards curves,
The ECMQV key agreement scheme is based on the MQV key agreement scheme,
The ECQV implicit certificate scheme.
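As an illustration of two of these schemes, the following sketch uses the third-party Python cryptography package (assumed installed) to run an ephemeral ECDH key agreement and an ECDSA signature over the NIST P-256 curve; key management is simplified for brevity:

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

# ECDH: each party combines its private key with the other's public key
# and arrives at the same shared secret.
alice_priv = ec.generate_private_key(ec.SECP256R1())
bob_priv = ec.generate_private_key(ec.SECP256R1())
alice_shared = alice_priv.exchange(ec.ECDH(), bob_priv.public_key())
bob_shared = bob_priv.exchange(ec.ECDH(), alice_priv.public_key())
assert alice_shared == bob_shared

# Derive a 256-bit symmetric key from the raw shared secret.
key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
           info=b"handshake demo").derive(alice_shared)

# ECDSA: sign a message with a (separate) private key and verify it.
signing_key = ec.generate_private_key(ec.SECP256R1())
message = b"example message"
signature = signing_key.sign(message, ec.ECDSA(hashes.SHA256()))
signing_key.public_key().verify(signature, message, ec.ECDSA(hashes.SHA256()))
print("shared key derived and signature verified")
```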
At the RSA Conference 2005, the National Security Agency (NSA) announced Suite B which exclusively uses ECC for digital signature generation and key exchange. The suite is intended to protect both classified and unclassified national security systems and information.
Recently, a large number of cryptographic primitives based on bilinear mappings on various elliptic curve groups, such as the Weil and Tate pairings, have been introduced. Schemes based on these primitives provide efficient identity-based encryption as well as pairing-based signatures, signcryption, key agreement, and proxy re-encryption.
Implementation
Some common implementation considerations include:
Domain parameters
To use ECC, all parties must agree on all the elements defining the elliptic curve, that is, the domain parameters of the scheme. The size of the field used is typically either prime (and denoted as p) or a power of two (2^m); the latter case is called the binary case, and it also necessitates the choice of an auxiliary curve denoted by f. Thus the field is defined by p in the prime case and by the pair of m and f in the binary case. The elliptic curve is defined by the constants a and b used in its defining equation. Finally, the cyclic subgroup is defined by its generator (a.k.a. base point) G. For cryptographic application the order of G, that is the smallest positive number n such that nG = ∞ (the point at infinity of the curve, and the identity element), is normally prime. Since n is the size of a subgroup of the curve group, it follows from Lagrange's theorem that the number h = #E/n (the total number of points on the curve divided by n) is an integer. In cryptographic applications this number h, called the cofactor, must be small (h ≤ 4) and, preferably, h = 1. To summarize: in the prime case, the domain parameters are (p, a, b, G, n, h); in the binary case, they are (m, f, a, b, G, n, h).
Unless there is an assurance that domain parameters were generated by a party trusted with respect to their use, the domain parameters must be validated before use.
The generation of domain parameters is not usually done by each participant because this involves computing the number of points on a curve which is time-consuming and troublesome to implement. As a result, several standard bodies published domain parameters of elliptic curves for several common field sizes. Such domain parameters are commonly known as "standard curves" or "named curves"; a named curve can be referenced either by name or by the unique object identifier defined in the standard documents:
NIST, Recommended Elliptic Curves for Government Use
SECG, SEC 2: Recommended Elliptic Curve Domain Parameters
ECC Brainpool (RFC 5639), ECC Brainpool Standard Curves and Curve Generation
SECG test vectors are also available. NIST has approved many SECG curves, so there is a significant overlap between the specifications published by NIST and SECG. EC domain parameters may be either specified by value or by name.
If one (despite the above) wants to construct one's own domain parameters, one should select the underlying field and then use one of the following strategies to find a curve with appropriate (i.e., near prime) number of points using one of the following methods:
Select a random curve and use a general point-counting algorithm, for example, Schoof's algorithm or the Schoof–Elkies–Atkin algorithm,
Select a random curve from a family which allows easy calculation of the number of points (e.g., Koblitz curves), or
Select the number of points and generate a curve with this number of points using the complex multiplication technique.
Several classes of curves are weak and should be avoided:
Curves over a field of 2^m elements with non-prime m are vulnerable to Weil descent attacks.
Curves such that n divides p^B − 1 (where p is the characteristic of the field: q for a prime field, or 2 for a binary field) for sufficiently small B are vulnerable to the Menezes–Okamoto–Vanstone (MOV) attack, which applies the usual discrete logarithm problem (DLP) in a small-degree extension field to solve ECDLP. The bound B should be chosen so that discrete logarithms in that extension field are at least as difficult to compute as discrete logs on the elliptic curve.
Curves such that the number of points on the curve equals the size of the field q are vulnerable to an attack that maps the points on the curve to the additive group of the field.
Key sizes
Because all the fastest known algorithms that allow one to solve the ECDLP (baby-step giant-step, Pollard's rho, etc.) need about √n steps, it follows that the size of the underlying field should be roughly twice the security parameter. For example, for 128-bit security one needs a curve over a field of size roughly 2^256. This can be contrasted with finite-field cryptography (e.g., DSA), which requires 3072-bit public keys and 256-bit private keys, and integer factorization cryptography (e.g., RSA), which requires a 3072-bit value of n, where the private key should be just as large. However, the public key may be smaller to accommodate efficient encryption, especially when processing power is limited.
The hardest ECC scheme (publicly) broken to date had a 112-bit key for the prime field case and a 109-bit key for the binary field case. For the prime field case, this was broken in July 2009 using a cluster of over 200 PlayStation 3 game consoles and could have been finished in 3.5 months using this cluster when running continuously. The binary field case was broken in April 2004 using 2600 computers over 17 months.
A current project is aiming at breaking the ECC2K-130 challenge by Certicom, by using a wide range of different hardware: CPUs, GPUs, FPGA.
Projective coordinates
A close examination of the addition rules shows that in order to add two points, one needs not only several additions and multiplications in the underlying field but also an inversion operation. The inversion (for a given x, finding y such that xy = 1) is one to two orders of magnitude slower than multiplication. However, points on a curve can be represented in different coordinate systems which do not require an inversion operation to add two points. Several such systems were proposed: in the projective system each point is represented by three coordinates (X, Y, Z) using the relation x = X/Z, y = Y/Z; in the Jacobian system a point is also represented with three coordinates (X, Y, Z), but a different relation is used: x = X/Z², y = Y/Z³; in the López–Dahab system the relation is x = X/Z, y = Y/Z²; in the modified Jacobian system the same relations are used but four coordinates (X, Y, Z, aZ⁴) are stored and used for calculations; and in the Chudnovsky Jacobian system five coordinates (X, Y, Z, Z², Z³) are used. Note that there may be different naming conventions, for example, IEEE P1363-2000 standard uses "projective coordinates" to refer to what is commonly called Jacobian coordinates. An additional speed-up is possible if mixed coordinates are used.
Fast reduction (NIST curves)
Reduction modulo p (which is needed for addition and multiplication) can be executed much faster if the prime p is a pseudo-Mersenne prime, that is, a prime close to a power of two such as p = 2^521 − 1 or p = 2^256 − 2^32 − 977. Compared to Barrett reduction, there can be an order of magnitude speed-up. The speed-up here is a practical rather than theoretical one, and derives from the fact that reduction modulo a number near a power of two can be performed efficiently by computers operating on binary numbers with bitwise operations.
The curves over with pseudo-Mersenne p are recommended by NIST. Yet another advantage of the NIST curves is that they use a = −3, which improves addition in Jacobian coordinates.
According to Bernstein and Lange, many of the efficiency-related decisions in NIST FIPS 186-2 are sub-optimal. Other curves are more secure and run just as fast.
Applications
Elliptic curves are applicable for encryption, digital signatures, pseudo-random generators and other tasks. They are also used in several integer factorization algorithms that have applications in cryptography, such as Lenstra elliptic-curve factorization.
In 1999, NIST recommended fifteen elliptic curves. Specifically, FIPS 186-4 has ten recommended finite fields:
Five prime fields for certain primes p of sizes 192, 224, 256, 384, and 521 bits. For each of the prime fields, one elliptic curve is recommended.
Five binary fields for m equal 163, 233, 283, 409, and 571. For each of the binary fields, one elliptic curve and one Koblitz curve was selected.
The NIST recommendation thus contains a total of five prime curves and ten binary curves. The curves were ostensibly chosen for optimal security and implementation efficiency.
In 2013, The New York Times stated that Dual Elliptic Curve Deterministic Random Bit Generation (or Dual_EC_DRBG) had been included as a NIST national standard due to the influence of NSA, which had included a deliberate weakness in the algorithm and the recommended elliptic curve. RSA Security in September 2013 issued an advisory recommending that its customers discontinue using any software based on Dual_EC_DRBG. In the wake of the exposure of Dual_EC_DRBG as "an NSA undercover operation", cryptography experts have also expressed concern over the security of the NIST recommended elliptic curves, suggesting a return to encryption based on non-elliptic-curve groups.
Elliptic curve cryptography is used by the cryptocurrency Bitcoin.
Ethereum version 2.0 makes extensive use of elliptic curve pairings in BLS signatures—as specified in the IETF draft BLS specification—for cryptographically assuring that a specific Eth2 validator has actually verified a particular transaction.
Security
Side-channel attacks
Unlike most other DLP systems (where it is possible to use the same procedure for squaring and multiplication), the EC addition is significantly different for doubling (P = Q) and general addition (P ≠ Q) depending on the coordinate system used. Consequently, it is important to counteract side-channel attacks (e.g., timing or simple/differential power analysis attacks) using, for example, fixed pattern window (a.k.a. comb) methods (note that this does not increase computation time). Alternatively one can use an Edwards curve; this is a special family of elliptic curves for which doubling and addition can be done with the same operation. Another concern for ECC-systems is the danger of fault attacks, especially when running on smart cards.
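As one sketch of the fixed-pattern idea (a Montgomery-ladder-style scalar multiplication, which is a different technique from the comb methods mentioned above), the Python outline below performs exactly one point addition and one point doubling per key bit, so the sequence of curve operations does not depend on the secret scalar. The point_add, point_double and infinity arguments are placeholders for a concrete curve implementation, and real constant-time code must additionally avoid the secret-dependent branching shown here, for example by using conditional swaps.

    def ladder_scalar_mult(k, P, infinity, point_add, point_double):
        # Maintain the invariant R1 = R0 + P; every bit of k triggers
        # exactly one point addition and one point doubling.
        R0, R1 = infinity, P
        for i in reversed(range(k.bit_length())):
            if (k >> i) & 1:
                R0 = point_add(R0, R1)
                R1 = point_double(R1)
            else:
                R1 = point_add(R0, R1)
                R0 = point_double(R0)
        return R0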
Backdoors
Cryptographic experts have expressed concerns that the National Security Agency has inserted a kleptographic backdoor into at least one elliptic curve-based pseudo random generator. Internal memos leaked by former NSA contractor Edward Snowden suggest that the NSA put a backdoor in the Dual EC DRBG standard. One analysis of the possible backdoor concluded that an adversary in possession of the algorithm's secret key could obtain encryption keys given only 32 bytes of PRNG output.
The SafeCurves project has been launched in order to catalog curves that are easy to securely implement and are designed in a fully publicly verifiable way to minimize the chance of a backdoor.
Quantum computing attacks
Shor's algorithm can be used to break elliptic curve cryptography by computing discrete logarithms on a hypothetical quantum computer. The latest quantum resource estimates for breaking a curve with a 256-bit modulus (128-bit security level) are 2330 qubits and 126 billion Toffoli gates. In comparison, using Shor's algorithm to break the RSA algorithm requires 4098 qubits and 5.2 trillion Toffoli gates for a 2048-bit RSA key, suggesting that ECC is an easier target for quantum computers than RSA. All of these figures vastly exceed any quantum computer that has ever been built, and estimates place the creation of such computers at a decade or more away.
Supersingular Isogeny Diffie–Hellman Key Exchange provides a post-quantum secure form of elliptic curve cryptography by using isogenies to implement Diffie–Hellman key exchanges. This key exchange uses much of the same field arithmetic as existing elliptic curve cryptography and requires computational and transmission overhead similar to many currently used public key systems.
In August 2015, the NSA announced that it planned to transition "in the not distant future" to a new cipher suite that is resistant to quantum attacks. "Unfortunately, the growth of elliptic curve use has bumped up against the fact of continued progress in the research on quantum computing, necessitating a re-evaluation of our cryptographic strategy."
Invalid curve attack
When ECC is used in virtual machines, an attacker may use an invalid curve to get a complete ECDH private key.
Patents
At least one ECC scheme (ECMQV) and some implementation techniques are covered by patents.
Alternative representations
Alternative representations of elliptic curves include:
Hessian curves
Edwards curves
Twisted curves
Twisted Hessian curves
Twisted Edwards curve
Doubling-oriented Doche–Icart–Kohel curve
Tripling-oriented Doche–Icart–Kohel curve
Jacobian curve
Montgomery curves
See also
Cryptocurrency
Curve25519
FourQ
DNSCurve
RSA (cryptosystem)
ECC patents
Elliptic curve Diffie-Hellman (ECDH)
Elliptic Curve Digital Signature Algorithm (ECDSA)
EdDSA
ECMQV
Elliptic curve point multiplication
Homomorphic Signatures for Network Coding
Hyperelliptic curve cryptography
Pairing-based cryptography
Public-key cryptography
Quantum cryptography
Supersingular isogeny key exchange
Notes
References
Standards for Efficient Cryptography Group (SECG), SEC 1: Elliptic Curve Cryptography, Version 1.0, September 20, 2000. (archived as of Nov 11, 2014)
D. Hankerson, A. Menezes, and S.A. Vanstone, Guide to Elliptic Curve Cryptography, Springer-Verlag, 2004.
I. Blake, G. Seroussi, and N. Smart, Elliptic Curves in Cryptography, London Mathematical Society 265, Cambridge University Press, 1999.
I. Blake, G. Seroussi, and N. Smart, editors, Advances in Elliptic Curve Cryptography, London Mathematical Society 317, Cambridge University Press, 2005.
L. Washington, Elliptic Curves: Number Theory and Cryptography, Chapman & Hall / CRC, 2003.
The Case for Elliptic Curve Cryptography, National Security Agency (archived January 17, 2009)
Online Elliptic Curve Cryptography Tutorial, Certicom Corp. (archived here as of March 3, 2016)
K. Malhotra, S. Gardner, and R. Patz, Implementation of Elliptic-Curve Cryptography on Mobile Healthcare Devices, 2007 IEEE International Conference on Networking, Sensing and Control, London, 15–17 April 2007, pp. 239–244
Saikat Basu, A New Parallel Window-Based Implementation of the Elliptic Curve Point Multiplication in Multi-Core Architectures, International Journal of Network Security, Vol. 13, No. 3, 2011, Page(s):234–241 (archived here as of March 4, 2016)
Christof Paar, Jan Pelzl, "Elliptic Curve Cryptosystems", Chapter 9 of "Understanding Cryptography, A Textbook for Students and Practitioners". (companion web site contains online cryptography course that covers elliptic curve cryptography), Springer, 2009. (archived here as of April 20, 2016)
Luca De Feo, David Jao, Jerome Plut, Towards quantum-resistant cryptosystems from supersingular elliptic curve isogenies, Springer 2011. (archived here as of May 7, 2012)
Jacques Vélu, Courbes elliptiques (...), Société Mathématique de France, 57, 1-152, Paris, 1978.
External links
Elliptic Curves at Stanford University
Elliptic curve cryptography
Public-key cryptography
Finite fields |
10294 | https://en.wikipedia.org/wiki/Encryption | Encryption | In cryptography, encryption is the process of encoding information. This process converts the original representation of the information, known as plaintext, into an alternative form known as ciphertext. Ideally, only authorized parties can decipher a ciphertext back to plaintext and access the original information. Encryption does not itself prevent interference but denies the intelligible content to a would-be interceptor.
For technical reasons, an encryption scheme usually uses a pseudo-random encryption key generated by an algorithm. It is possible to decrypt the message without possessing the key but, for a well-designed encryption scheme, considerable computational resources and skills are required. An authorized recipient can easily decrypt the message with the key provided by the originator to recipients but not to unauthorized users.
Historically, various forms of encryption have been used to aid in cryptography. Early encryption techniques were often used in military messaging. Since then, new techniques have emerged and become commonplace in all areas of modern computing. Modern encryption schemes use the concepts of public-key and symmetric-key. Modern encryption techniques ensure security because modern computers are inefficient at cracking the encryption.
History
Ancient
One of the earliest forms of encryption is symbol replacement, which was first found in the tomb of Khnumhotep II, who lived in 1900 BC Egypt. Symbol replacement encryption is “non-standard,” which means that the symbols require a cipher or key to understand. This type of early encryption was used throughout Ancient Greece and Rome for military purposes. One of the most famous military encryption developments was the Caesar Cipher, a system in which a letter in normal text is shifted a fixed number of positions down the alphabet to get the encoded letter. A message encoded with this type of encryption could be decoded by anyone who knew the fixed shift.
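For illustration, here is a minimal Python sketch of such a fixed-shift (Caesar-style) substitution over the 26-letter alphabet; it is a toy example only and, as the next paragraph explains, is easily broken.

    def caesar(text, shift):
        # Shift each letter by a fixed amount, wrapping around the alphabet;
        # characters other than letters are passed through unchanged.
        out = []
        for ch in text:
            if ch.isalpha():
                base = ord('A') if ch.isupper() else ord('a')
                out.append(chr((ord(ch) - base + shift) % 26 + base))
            else:
                out.append(ch)
        return ''.join(out)

    ciphertext = caesar("ATTACK AT DAWN", 3)     # 'DWWDFN DW GDZQ'
    plaintext = caesar(ciphertext, -3)           # anyone who knows the shift can decode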
Around 800 AD, Arab mathematician Al-Kindi developed the technique of frequency analysis, which was an attempt to systematically crack Caesar ciphers. This technique looked at the frequency of letters in the encrypted message to determine the appropriate shift. This technique was rendered ineffective after the creation of the polyalphabetic cipher by Leone Alberti in 1465, which switched among multiple cipher alphabets. In order for frequency analysis to be useful, the person trying to decrypt the message would need to know which language the sender chose.
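A sketch of the frequency-analysis idea in Python: it guesses a Caesar shift by assuming that the most common letter of the ciphertext stands for 'E', the most frequent letter in typical English text. This simplification fails on short or unusual messages and is meant only to illustrate the principle.

    from collections import Counter

    def guess_caesar_shift(ciphertext):
        # Count the letters and assume the most frequent one encrypts 'E'.
        letters = [ch.upper() for ch in ciphertext if ch.isalpha()]
        if not letters:
            return 0
        most_common = Counter(letters).most_common(1)[0][0]
        return (ord(most_common) - ord('E')) % 26     # estimated shift used by the sender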
19th–20th century
Around 1790, Thomas Jefferson theorized a cipher to encode and decode messages in order to provide a more secure way of military correspondence. The cipher, known today as the Wheel Cipher or the Jefferson Disk, although never actually built, was conceived as a spool that could jumble an English message of up to 36 characters. The message could be decrypted by plugging the jumbled message into a receiver with an identical cipher.
A similar device to the Jefferson Disk, the M-94, was developed in 1917 independently by US Army Major Joseph Mauborgne. This device was used in U.S. military communications until 1942.
In World War II, the Axis powers used a more advanced version of the M-94 called the Enigma Machine. The Enigma Machine was more complex because unlike the Jefferson Wheel and the M-94, each day the jumble of letters switched to a completely new combination. Each day's combination was only known by the Axis, so many thought the only way to break the code would be to try over 17,000 combinations within 24 hours. The Allies used computing power to severely limit the number of reasonable combinations they needed to check every day, leading to the breaking of the Enigma Machine.
Modern
Today, encryption is used in the transfer of communication over the Internet for security and commerce. As computing power continues to increase, computer encryption is constantly evolving to prevent attacks.
Encryption in cryptography
In the context of cryptography, encryption serves as a mechanism to ensure confidentiality. Since data may be visible on the Internet, sensitive information such as passwords and personal communication may be exposed to potential interceptors. The process of encrypting and decrypting messages involves keys. The two main types of keys in cryptographic systems are symmetric-key and public-key (also known as asymmetric-key).
Many complex cryptographic algorithms often use simple modular arithmetic in their implementations.
Types
Symmetric key
In symmetric-key schemes, the encryption and decryption keys are the same. Communicating parties must have the same key in order to achieve secure communication. The German Enigma Machine utilized a new symmetric-key each day for encoding and decoding messages.
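As a present-day illustration of the symmetric-key idea, in which the same key encrypts and decrypts, the short Python sketch below uses the Fernet recipe from the third-party cryptography package, assuming that package is installed; the message text is arbitrary.

    from cryptography.fernet import Fernet

    key = Fernet.generate_key()          # the single shared secret key
    f = Fernet(key)

    token = f.encrypt(b"meet at the usual place")    # ciphertext
    plaintext = f.decrypt(token)                     # succeeds only with the same key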
Public key
In public-key encryption schemes, the encryption key is published for anyone to use and encrypt messages. However, only the receiving party has access to the decryption key that enables messages to be read. Public-key encryption was first described in a secret document in 1973; beforehand, all encryption schemes were symmetric-key (also called private-key). The work of Diffie and Hellman, published subsequently in a journal with a large readership, explicitly described the value of the methodology, and the method became known as the Diffie-Hellman key exchange.
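A toy numeric sketch of the Diffie-Hellman idea using Python's built-in modular exponentiation; the textbook-sized parameters and fixed exponents are for illustration only and offer no security (real deployments use moduli of 2048 bits or more, or elliptic-curve groups).

    p, g = 23, 5                 # small public modulus and generator (toy values)
    a, b = 6, 15                 # private exponents chosen by the two parties

    A = pow(g, a, p)             # first party publishes 8
    B = pow(g, b, p)             # second party publishes 19

    assert pow(B, a, p) == pow(A, b, p) == 2     # both sides derive the same shared secret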
RSA (Rivest–Shamir–Adleman) is another notable public-key cryptosystem. Created in 1978, it is still used today for applications involving digital signatures. Using number theory, the RSA algorithm selects two prime numbers, which help generate both the encryption and decryption keys.
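A toy RSA sketch in Python with deliberately tiny primes, shown only to make the role of the two primes concrete; real keys use primes hundreds of digits long, and the modular inverse via pow(e, -1, phi) requires Python 3.8 or later.

    p, q = 61, 53                # two toy primes
    n = p * q                    # 3233, the public modulus
    phi = (p - 1) * (q - 1)      # 3120
    e = 17                       # public exponent, coprime to phi
    d = pow(e, -1, phi)          # private exponent, 2753

    m = 65                       # a message encoded as a number smaller than n
    c = pow(m, e, n)             # encryption gives 2790
    assert pow(c, d, n) == m     # decryption with the private exponent recovers m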
A publicly available public-key encryption application called Pretty Good Privacy (PGP) was written in 1991 by Phil Zimmermann, and distributed free of charge with source code. PGP was purchased by Symantec in 2010 and is regularly updated.
Uses
Encryption has long been used by militaries and governments to facilitate secret communication. It is now commonly used in protecting information within many kinds of civilian systems. For example, the Computer Security Institute reported that in 2007, 71% of companies surveyed utilized encryption for some of their data in transit, and 53% utilized encryption for some of their data in storage. Encryption can be used to protect data "at rest", such as information stored on computers and storage devices (e.g. USB flash drives). In recent years, there have been numerous reports of confidential data, such as customers' personal records, being exposed through loss or theft of laptops or backup drives; encrypting such files at rest helps protect them if physical security measures fail. Digital rights management systems, which prevent unauthorized use or reproduction of copyrighted material and protect software against reverse engineering (see also copy protection), are another somewhat different example of using encryption on data at rest.
Encryption is also used to protect data in transit, for example data being transferred via networks (e.g. the Internet, e-commerce), mobile telephones, wireless microphones, wireless intercom systems, Bluetooth devices and bank automatic teller machines. There have been numerous reports of data in transit being intercepted in recent years. Data should also be encrypted when transmitted across networks in order to protect against eavesdropping of network traffic by unauthorized users.
Data erasure
Conventional methods for permanently deleting data from a storage device involve overwriting the device's whole content with zeros, ones, or other patterns – a process which can take a significant amount of time, depending on the capacity and the type of storage medium. Cryptography offers a way of making the erasure almost instantaneous. This method is called crypto-shredding. An example implementation of this method can be found on iOS devices, where the cryptographic key is kept in a dedicated 'effaceable storage'. Because the key is stored on the same device, this setup on its own does not offer full privacy or security protection if an unauthorized person gains physical access to the device.
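A minimal sketch of the crypto-shredding idea using the third-party Python cryptography package (assumed to be installed): the data is only ever stored encrypted, so destroying the key, which is kept separately, is enough to make the stored ciphertext unreadable.

    from cryptography.fernet import Fernet

    key = Fernet.generate_key()                              # held in a separate, erasable key store
    ciphertext = Fernet(key).encrypt(b"records to retire")   # this is what goes to disk

    # "Erasing" the data means destroying only the key; the ciphertext may remain on disk.
    key = None                                               # a real system would securely wipe the key store entry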
Limitations
Encryption is used in the 21st century to protect digital data and information systems. As computing power has increased over the years, encryption technology has become more advanced and secure. However, this advancement in technology has also exposed a potential limitation of today's encryption methods.
The length of the encryption key is an indicator of the strength of the encryption method. For example, the key for the original Data Encryption Standard (DES) was 56 bits, meaning it had 2^56 possible combinations. With today's computing power, a 56-bit key is no longer secure, being vulnerable to brute-force attack. Today the standard for modern encryption keys is up to 2048 bits with the RSA system. Decrypting a 2048-bit encryption key is nearly impossible in light of the number of possible combinations. However, quantum computing is threatening to change this secure nature.
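A small arithmetic sketch of why key length matters: each added bit doubles the number of keys an attacker must try. The guessing rate below is an assumed, illustrative figure, not a measured one.

    des_keys = 2 ** 56                      # 72,057,594,037,927,936 possible 56-bit keys
    modern_keys = 2 ** 128                  # about 3.4e38 possible 128-bit symmetric keys

    guesses_per_second = 10 ** 12           # assumed attacker speed, for illustration only
    des_seconds = des_keys / guesses_per_second                          # ~72,000 s, under a day
    modern_years = modern_keys / guesses_per_second / (365 * 24 * 3600)  # ~1.1e19 years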
Quantum computing utilizes properties of quantum mechanics in order to process large amounts of data simultaneously. Quantum computing has been found to achieve computing speeds thousands of times faster than today's supercomputers for certain problems. This computing power presents a challenge to today's encryption technology. For example, RSA encryption utilizes the multiplication of very large prime numbers to create a semiprime number for its public key. Decoding this key without its private key requires this semiprime number to be factored, which can take a very long time with modern computers. It would take a supercomputer anywhere from weeks to months to factor this key. However, quantum computers could use quantum algorithms to factor this semiprime number in roughly the same amount of time it takes normal computers to generate it. This would make all data protected by current public-key encryption vulnerable to quantum computing attacks. Other encryption techniques like elliptic curve cryptography and symmetric key encryption are also vulnerable to quantum computing.
While quantum computing could be a threat to encryption security in the future, quantum computing as it currently stands is still very limited. Quantum computing currently is not commercially available, cannot handle large amounts of code, and only exists as computational devices, not computers. Furthermore, quantum computing advancements will be able to be utilized in favor of encryption as well. The National Security Agency (NSA) is currently preparing post-quantum encryption standards for the future. Quantum encryption promises a level of security that will be able to counter the threat of quantum computing.
Attacks and countermeasures
Encryption is an important tool but is not sufficient alone to ensure the security or privacy of sensitive information throughout its lifetime. Most applications of encryption protect information only at rest or in transit, leaving sensitive data in clear text and potentially vulnerable to improper disclosure during processing, such as by a cloud service for example. Homomorphic encryption and secure multi-party computation are emerging techniques to compute on encrypted data; these techniques are general and Turing complete but incur high computational and/or communication costs.
In response to encryption of data at rest, cyber-adversaries have developed new types of attacks. These more recent threats to encryption of data at rest include cryptographic attacks, stolen ciphertext attacks, attacks on encryption keys, insider attacks, data corruption or integrity attacks, data destruction attacks, and ransomware attacks. Data fragmentation and active defense data protection technologies attempt to counter some of these attacks, by distributing, moving, or mutating ciphertext so it is more difficult to identify, steal, corrupt, or destroy.
Integrity protection of ciphertexts
Encryption, by itself, can protect the confidentiality of messages, but other techniques are still needed to protect the integrity and authenticity of a message; for example, verification of a message authentication code (MAC) or a digital signature. Authenticated encryption algorithms are designed to provide both encryption and integrity protection together. Standards for cryptographic software and hardware to perform encryption are widely available, but successfully using encryption to ensure security may be a challenging problem. A single error in system design or execution can allow successful attacks. Sometimes an adversary can obtain unencrypted information without directly undoing the encryption. See for example traffic analysis, TEMPEST, or Trojan horse.
Integrity protection mechanisms such as MACs and digital signatures must be applied to the ciphertext when it is first created, typically on the same device used to compose the message, to protect a message end-to-end along its full transmission path; otherwise, any node between the sender and the encryption agent could potentially tamper with it. Encrypting at the time of creation is only secure if the encryption device itself has correct keys and has not been tampered with. If an endpoint device has been configured to trust a root certificate that an attacker controls, for example, then the attacker can both inspect and tamper with encrypted data by performing a man-in-the-middle attack anywhere along the message's path. The common practice of TLS interception by network operators represents a controlled and institutionally sanctioned form of such an attack, but countries have also attempted to employ such attacks as a form of control and censorship.
Ciphertext length and padding
Even when encryption correctly hides a message's content and it cannot be tampered with at rest or in transit, a message's length is a form of metadata that can still leak sensitive information about the message. For example, the well-known CRIME and BREACH attacks against HTTPS were side-channel attacks that relied on information leakage via the length of encrypted content. Traffic analysis is a broad class of techniques that often employs message lengths to infer sensitive information about traffic flows by aggregating information about a large number of messages.
Padding a message's payload before encrypting it can help obscure the cleartext's true length, at the cost of increasing the ciphertext's size and introducing or increasing bandwidth overhead. Messages may be padded randomly or deterministically, with each approach having different tradeoffs. Encrypting and padding messages to form padded uniform random blobs or PURBs is a practice guaranteeing that the ciphertext leaks no metadata about its cleartext's content, and leaks asymptotically minimal information via its length.
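A sketch of deterministic length padding in Python: messages are padded up to the next multiple of a fixed bucket size before encryption, so the ciphertext length reveals only which size bucket the message falls into. The bucket size and the zero-byte filler are arbitrary choices for this example, not a standardized scheme such as PURBs.

    BUCKET = 256    # pad every message up to a multiple of 256 bytes (example value)

    def pad_to_bucket(message: bytes) -> bytes:
        # Length-prefix the payload, then fill with zero bytes up to the bucket boundary.
        framed = len(message).to_bytes(4, "big") + message
        padded_len = -(-len(framed) // BUCKET) * BUCKET      # round up to a whole bucket
        return framed + b"\x00" * (padded_len - len(framed))

    def unpad(padded: bytes) -> bytes:
        n = int.from_bytes(padded[:4], "big")
        return padded[4:4 + n]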
See also
Cryptosystem
Cold boot attack
Cyberspace Electronic Security Act (US)
Dictionary attack
Disk encryption
Encrypted function
Export of cryptography
Geo-blocking
Indistinguishability obfuscation
Key management
Multiple encryption
Physical Layer Encryption
Rainbow table
Rotor machine
Substitution cipher
Television encryption
Tokenization (data security)
References
Further reading
Kahn, David (1967), The Codebreakers – The Story of Secret Writing.
Preneel, Bart (2000), "Advances in Cryptology – EUROCRYPT 2000", Springer Berlin Heidelberg.
Sinkov, Abraham (1966): Elementary Cryptanalysis: A Mathematical Approach, Mathematical Association of America.
Tenzer, Theo (2021): SUPER SECRETO – The Third Epoch of Cryptography: Multiple, exponential, quantum-secure and above all, simple and practical Encryption for Everyone, Norderstedt, ISBN 9783755761174.
Cryptography
Data protection |
10384 | https://en.wikipedia.org/wiki/Boeing%20E-3%20Sentry | Boeing E-3 Sentry | The Boeing E-3 Sentry is an American airborne early warning and control (AEW&C) aircraft developed by Boeing. E-3s are commonly known as AWACS (Airborne Warning and Control System). Derived from the Boeing 707 airliner, it provides all-weather surveillance, command, control, and communications, and is used by the United States Air Force, NATO, French Air and Space Force, and Royal Saudi Air Force. The E-3 is distinguished by the distinctive rotating radar dome (rotodome) above the fuselage. Production ended in 1992 after 68 aircraft had been built.
In the mid-1960s, the U.S. Air Force (USAF) was seeking an aircraft to replace its piston-engined Lockheed EC-121 Warning Star, which had been in service for over a decade. After issuing preliminary development contracts to three companies, the USAF picked Boeing to construct two airframes to test Westinghouse Electric and Hughes's competing radars. Both radars used pulse-Doppler technology, with Westinghouse's design emerging as the contract winner. Testing on the first production E-3 began in October 1975.
The first USAF E-3 was delivered in March 1977, and during the next seven years, a total of 34 aircraft were manufactured. E-3s were also purchased by NATO (18), the United Kingdom (7), France (4) and Saudi Arabia (5).
In 1991, when the last aircraft had been delivered, E-3s participated in the Persian Gulf War, playing a crucial role of directing coalition aircraft against Iraqi forces. The aircraft's capabilities have been maintained and enhanced through numerous upgrades. In 1996, Westinghouse Electric's Defense & Electronic Systems division was acquired by Northrop Corporation, before being renamed Northrop Grumman Mission Systems, which currently supports the E-3's radar.
Development
Background
In 1963, the USAF asked for proposals for an Airborne Warning and Control System (AWACS) to replace its EC-121 Warning Stars, which had served in the airborne early warning role for over a decade. The new aircraft would take advantage of improvements in radar technology and computer-aided radar data analysis and data reduction. These developments allowed airborne radars to "look down", i.e. to detect the movement of low-flying aircraft, and discriminate, even over land, target aircraft's movements; previously this had been impossible due to the inability to discriminate an aircraft's track from ground clutter. Contracts were issued to Boeing, Douglas, and Lockheed, the latter being eliminated in July 1966. In 1967, a parallel program was put into place to develop the radar, with Westinghouse Electric Corporation and Hughes Aircraft being asked to compete in producing the radar system. In 1968, it was referred to as Overland Radar Technology (ORT) during development tests on the modified EC-121Q. The Westinghouse radar antenna was going to be used by whichever company won the radar competition since Westinghouse had pioneered the design of high-power radio frequency (RF) phase-shifters, which are used to both focus the RF into a pencil beam and scan electronically for altitude determination.
Boeing initially proposed a purpose-built aircraft, but tests indicated it would not outperform the already-operational 707, so the latter was chosen instead. To increase endurance, this design was to be powered by eight General Electric TF34s. It would carry its radar in a rotating dome mounted at the top of a forward-swept tail, above the fuselage. Boeing was selected ahead of McDonnell Douglas's DC-8-based proposal in July 1970. Initial orders were placed for two aircraft, designated EC-137D, as test beds to evaluate the two competing radars. As the test beds did not need the same 14-hour endurance demanded of the production aircraft, the EC-137s retained the Pratt & Whitney JT3D commercial engines, and a later reduction in the endurance requirement led to retention of the JT3D engines in production.
The first EC-137 made its maiden flight on 9 February 1972, with the fly-off between the two radars taking place from March to July of that year. Favorable test results led to the selection of Westinghouse's radar for the production aircraft. Hughes' radar was initially thought to be a certain winner due to its related development of the APG-63 radar for the new F-15 Eagle. The Westinghouse radar used a pipelined fast Fourier transform (FFT) to digitally resolve 128 Doppler frequencies, while Hughes's radars used analog filters based on the design for the F-15. Westinghouse's engineering team won this competition by using a programmable 18-bit computer whose software could be modified before each mission. This computer was the AN/AYK-8 design from the B-57G program, and designated AYK-8-EP1 for its much expanded memory. This radar also multiplexed a beyond-the-horizon (BTH) pulse mode that could complement the pulse-Doppler radar mode. This proved to be beneficial especially when the BTH mode is used to detect ships at sea when the radar beam is directed below the horizon.
Full-scale development
Approval was given on 26 January 1973 for the full-scale development of the AWACS system. To allow further development of the aircraft's systems, orders were placed for three preproduction aircraft, the first of which performed its maiden flight in February 1975. IBM and Hazeltine were selected to develop the mission computer and display system. The IBM computer was designated 4PI, and the software was written in JOVIAL. A Semi-Automatic Ground Environment (SAGE) or back-up interceptor control (BUIC) operator would immediately be at home with the track displays and tabular displays, but differences in symbology would create compatibility problems in tactical ground radar systems in Iceland, mainland Europe, and South Korea over Link-11 (TADIL-A). In 1977, Iran placed an order for ten E-3s; however, this order was cancelled following the Iranian Revolution.
Engineering, test and evaluation began on the first E-3 Sentry in October 1975. Between 1977 and 1992, a total of 68 E-3s were built.
Future status
Because the Boeing 707 is no longer in production, the E-3 mission package has been fitted into the Boeing E-767 for the Japan Air Self Defense Forces. The E-10 MC2A was intended to replace USAF E-3s—along with the RC-135 and the E-8 Joint STARS, but the program was canceled by the Department of Defense.
NATO intends to extend the operational status of its AWACS until 2035 when it is due to be replaced by the Alliance Future Surveillance and Control (AFSC) program. The Royal Air Force (RAF) chose to limit investment in its E-3D fleet in the early 2000s, diverting Sentry upgrade funds to a replacement program. On 22 March 2019, the UK Defence Secretary announced a $1.98 billion contract to purchase five E-7 Wedgetails.
Design
Overview
The E-3 Sentry's airframe is a modified Boeing 707-320B Advanced model. Modifications include a rotating radar dome (rotodome), uprated hydraulics from 241 to 345 bar (3500–5000 PSI) to drive the rotodome, single-point ground refueling, air refueling, and a bail-out tunnel or chute. A second bail-out chute was deleted to cut mounting costs.
USAF and NATO E-3s have an unrefueled endurance of about 8 hours of flying. The newer E-3 versions bought by France, Saudi Arabia, and the UK are equipped with newer CFM56-2 turbofan engines, and these can fly for about 11 hours or more. The Sentry's range and on-station time can be increased through air-to-air refueling and the crews can work in shifts by the use of an on-board crew rest and meals area. The aircraft are equipped with one toilet in the rear, and a urinal behind the cockpit. Saudi E-3s were delivered with an additional toilet in the rear.
When deployed, the E-3 monitors an assigned area of the battlefield and provides information for commanders of air operations to gain and maintain control of the battle; while as an air defense asset, E-3s can detect, identify, and track airborne enemy forces far from the boundaries of the U.S. or NATO countries and can direct interceptor aircraft to these targets. In support of air-to-ground operations, the E-3 can provide direct information needed for interdiction, reconnaissance, airlift, and close-air support for friendly ground forces.
Avionics
The unpressurized rotodome is 30 ft (9.1 m) in diameter, 6 ft (1.8 m) thick at the center, and is held above the fuselage by two struts. It is tilted down at the front to reduce its aerodynamic drag, which lessens its detrimental effect on take-offs and endurance. This tilt is corrected electronically by both the radar and secondary surveillance radar antenna phase shifters. The rotodome uses bleed air, outside cooling doors, and fluorocarbon-based cold plate cooling to maintain the electronic and mechanical equipment temperatures. The hydraulically rotated antenna system permits the AN/APY-1 and AN/APY-2 passive electronically scanned array radar systems to provide surveillance from the Earth's surface up into the stratosphere, over land or water.
Other major subsystems in the E-3 Sentry are navigation, communications, and computers. 14 consoles display computer-processed data in graphic and tabular format on screens. Its operators perform surveillance, identification, weapons control, battle management and communications functions. Data may be forwarded in real-time to any major command and control center in rear areas or aboard ships. In times of crisis, data may also be forwarded to the National Command Authority in the U.S. via RC-135 or aircraft carrier task forces.
Electrical generators mounted in each of the E-3's four engines provide 1 megawatt of electrical power required by the aircraft's radars and electronics. Its pulse-Doppler radar has a range of more than 250 mi (400 km) for low-flying targets at its operating altitude, and the pulse (BTH) radar has a range of approximately 400 mi (650 km) for aircraft flying at medium to high altitudes. The radar, combined with a secondary surveillance radar (SSR) and electronic support measures (ESM), provides a look down capability, to detect, identify, and track low-flying aircraft, while eliminating ground clutter returns.
Upgrades
Between 1987 and 2001, USAF E-3s were upgraded under the "Block 30/35 Modification Program". Enhancements included:
The installation of ESM and an electronic surveillance capability, for both active and passive means of detection.
Installation of the Joint Tactical Information Distribution System (JTIDS), which provides rapid and secure communication for transmitting information, including target positions and identification data, to other friendly platforms.
Global Positioning System (GPS) capability was added.
Onboard computers were overhauled to accommodate JTIDS, Link-16, the new ESM systems and to allow for future enhancements.
RSIP
The Radar System Improvement Program (RSIP) was a joint US/NATO development program. RSIP enhances the operational capability of the E-3 radars' electronic countermeasures, and improves the system's reliability, maintainability, and availability. Essentially, this program replaced the older transistor-transistor logic (TTL) and emitter-coupled logic (MECL) electronic components, long-since out of production, with off-the-shelf computers that utilised a High-level programming language instead of assembly language. Significant improvement came from adding pulse compression to the pulse-Doppler mode. These hardware and software modifications improve the E-3 radars' performance, providing enhanced detection with an emphasis towards low radar cross-section (RCS) targets.
The RAF had also joined the USAF in adding RSIP to upgrade the E-3's radars. The retrofitting of the E-3 squadrons was completed in December 2000. Along with the RSIP upgrade was installation of the Global Positioning System/Inertial Navigation Systems which improved positioning accuracy. In 2002, Boeing was awarded a contract to add RSIP to the small French AWACS squadron. Installation was completed in 2006. Saudi Arabia began RSIP upgrades in 2013; the first aircraft being upgraded by Boeing in Seattle with the four remaining aircraft upgraded in Riyadh between 2014 and 2016.
NATO Mid Term Program
Between 2000 and 2008, NATO upgraded its E-3s to the Mid Term Program (MTP) standard. This involved technical upgrades and a total multi-sensor-systems integration.
DRAGON
In 2009, the USAF, in cooperation with NATO, entered into a major flight deck avionics modernization program in order to maintain compliance with worldwide airspace mandates. The program, called DRAGON (for DMS Replacement of Avionics for Global Operation and Navigation), was awarded to Boeing and Rockwell Collins in 2010. Drawing on their Flight2 flight management system (FMS), almost all the avionics were replaced with more modern digital equipment from Rockwell Collins. Main upgrades include a Digital Audio Distribution System, Mode-5/ADS-B transponder, Inmarsat and VDL datalinks, and a terrain awareness and warning system (TAWS). The centerpiece flight deck hardware consists of five 6x8 color graphics displays and two color CDUs. DRAGON laid the foundation for subsequent upgrades including GPS M-Code, Iridium ATC, and Autopilot. USAF DRAGON Production began in 2018.
USAF Block 40/45
In 2014 the USAF began upgrading block 30/35 E-3B/Cs into block 40/45 E-3Gs. This upgrade replaces the main flight computer with a Red Hat Linux-based system, as well as replacing the DOS 2.0-like operating system with a Windows 95-like system on the operator workstations. In 2016, a three-week long cybersecurity vulnerability test revealed that the 40/45 block and its supporting ground equipment were vulnerable to cyber threats, and were thus deemed "not survivable." This caused a delay of approximately two years. Twenty-four E-3s are projected to complete this upgrade to 40/45 by the end of fiscal year 2020, while seven aircraft will be retired to save upgrade costs and harvest out-of-production components.
NATO Final Lifetime Extension Program
NATO intends to extend the operational status of its AWACS until 2035 by significantly upgrading fourteen aircraft in the Final Lifetime Extension Program (FLEP) between 2019 and 2026. Upgrades include the expansion of data capacity, expansion of bandwidth for satellite communications, new encryption equipment, new HAVE QUICK radios, upgraded mission computing software and new operator consoles. The supporting ground systems (mission training center and mission planning and evaluation system) will also be upgraded to the latest standard. NATO Airborne Early Warning & Control Program Management Agency (NAPMA) is the preparing and executing authority for the FLEP. FLEP will be combined with the standard planned higher echelon technical maintenance.
Operational history
In March 1977 the 552nd Airborne Warning and Control Wing received the first E-3 aircraft at Tinker AFB, Oklahoma. The 34th and last USAF Sentry was delivered in June 1984. The USAF has a total of thirty-one E-3s in active service. Twenty-seven are stationed at Tinker AFB and belong to the Air Combat Command (ACC). Four are assigned to the Pacific Air Forces (PACAF) and stationed at Kadena AB, Okinawa and Elmendorf AFB, Alaska. One aircraft (TS-3) was assigned to Boeing for testing and development (retired/scrapped June 2012).
E-3 Sentry aircraft were among the first to deploy during Operation Desert Shield, where they established a radar screen to monitor Iraqi forces. During Operation Desert Storm, E-3s flew 379 missions and logged 5,052 hours of on-station time. The data collection capability of the E-3 radar and computer subsystems allowed an entire air war to be recorded for the first time. In addition to providing senior leadership with time-critical information on the actions of enemy forces, E-3 controllers assisted in 38 of the 41 air-to-air kills recorded during the conflict.
NATO, UK, French and USAF AWACS played an important role in the air campaign against Serbia and Montenegro in the former republic of FR Yugoslavia. From March to June 1999 the aircraft were deployed in the NATO bombing of Yugoslavia (operation Allied Force) directing allied strike and air defence aircraft to and from their targets. Over 1,000 aircraft operating from bases in Germany and Italy took part in the air campaign which was intended to destroy Yugoslav air defenses and high-value targets such as the bridges across the Danube river, factories, power stations, telecommunications facilities, and military installations.
On 18 November 2015, an E-3G was deployed to the Middle East to begin flying combat missions in support of Operation Inherent Resolve against ISIL, marking the first combat deployment of the upgraded Block 40/45 aircraft.
France and United Kingdom
In February 1987 the UK and France ordered E-3 aircraft in a joint project which saw deliveries start in 1991. The British requirement arose due to the cancellation of the BAE Nimrod AEW3 project. While France operates its E-3F aircraft independently of NATO, UK E-3Ds formed the E-3D Component of the NATO Airborne Early Warning and Control Force (NAEWCF), receiving much of their tasking directly from NATO. However, RAF E-3Ds remain UK manned and capable of independent, national tasking. This has been done on numerous occasions, notably when E-3Ds were committed to operations over Afghanistan in 2001 and Iraq in 2003.
The UK fleet has slowly been reduced from seven aircraft since 2011. In 2009, the UK effectively limited the service life of the E-3D fleet by de-funding the Project Eagle upgrade which would have seen it upgraded in line with the USAF Block 40/45 standard. AirForces Monthly reported that by December 2020, just two aircraft were available for operations at any one time. The Strategic Defence and Security Review 2015 had announced the intention to retain the E-3D fleet until 2035; however, in March 2019, the Ministry of Defence announced that the E-3Ds would be replaced by five E-7 Wedgetails from 2023. The £1.51 billion contract was awarded to Boeing without a competitive procurement process, a decision criticised by both competitors of Boeing and the UK's Defence Select Committee. The 2021 Integrated Defence Review confirmed a reduced order of three aircraft. On 27 January 2015, the RAF deployed an E-3D Sentry to Cyprus in support of U.S.-led coalition airstrikes against Islamic State militants in Iraq and Syria. The last intended operational flight by an RAF E-3 Sentry took place in July 2021, when the type was to be retired from service; however, RAF ZH101 and ZH106 both flew patrols over Poland and Eastern Europe during Russia's invasion of Ukraine in late February and March 2022.
France operates four aircraft, all fitted with the newer CFM56-2 engines.
NATO
NATO acquired 18 E-3As and support equipment, with the first aircraft delivered in January 1982. The aircraft are registered in Luxembourg. The eighteen E-3s were operated by Number 1, 2 and 3 Squadrons of NATO's E-3 Component, based at NATO Air Base Geilenkirchen.
NATO E-3s participated in Operation Eagle Assist after the September 11 attacks on the World Trade Center towers and the Pentagon. NATO and RAF E-3s participated in the military intervention in Libya.
Presently, 16 NATO E-3As are in the inventory, since one E-3 was lost in a crash and one was retired from service in 2015. The latter was due for its six-year cycle Depot Level Maintenance (DLM) inspection which would have been very costly. The "449 Retirement Project" resulted in reclamation of critical parts with a value of upwards of $40 million which will be used to support the 16 active aircraft. Some of the parts to be removed are no longer on the market or have become very expensive.
Variants
EC-137D
2 prototype AWACS aircraft with JT3D engines, 1 fitted with a Westinghouse Electric radar and 1 with a Hughes Aircraft Company radar. Both converted to E-3A standard with TF33 engines.
E-3A
Production aircraft with TF33 engines and AN/APY-1 radar, 24 built for USAF (later converted to E-3B standard), total of 34 ordered but the last 9 completed as E-3C. One additional aircraft retained by Boeing for testing, 18 built for NATO with TF33 engines and 5 for Saudi Arabia with CFM56 engines.
KE-3A
These are not AWACS aircraft but CFM56 powered tankers based on the E-3 design. 8 were sold to Saudi Arabia.
E-3B
USAF Block 30 modification. E-3As with improvements, 24 conversions.
E-3C
USAF Block 35 modification. Production aircraft with AN/APY-2 radar, additional electronic consoles and system improvements, ten built.
JE-3C
One E-3A aircraft used by Boeing for trials later redesignated E-3C.
E-3D
Production aircraft for the RAF to E-3C standard with CFM56 engines and British modifications designated Sentry AEW.1, 7 built. Modifications included the addition of a refuelling probe next to the existing boom AAR receptacle, CFM56 engines, wingtip ESM pods, an enhanced Maritime Surveillance Capability (MSC) offering Maritime Scan-Scan Processing (MSSP), JTIDS and Havequick 2 radios.
E-3F
Production aircraft for the French Air and Space Force to E-3C standard with CFM56 engines and French modifications, 4 built.
E-3G
USAF Block 40/45 modification. Includes hardware and software upgrades to improve communications, computer processing power, threat tracking, and others, and automates some previously manual functions. Initial operating capability (IOC) declared in July 2015.
Operators
The Chilean Air Force purchased three second-hand E-3D Sentry aircraft from the Royal Air Force.
The French Air and Space Force purchased four E-3F aircraft.
Escadron de détection et de contrôle aéroportés 36 Berry (36th Airborne Detection and Control Squadron "Berry") based at Avord Air Base.
18 E-3 AWACS were purchased – 1 was written off in Greece, 3 were retired from service. Mainly responsible for monitoring European NATO airspace, they have also been deployed outside the area in support of NATO commitments. The 20 multinational crews are provided by 15 of the 28 NATO member states.
NATO Airborne Early Warning and Control Force – E-3A Component. Based at Geilenkirchen (Germany), with forward operating bases at Konya (Turkey), Preveza/Aktion (Greece) and Trapani/Birgi (Italy) and a forward operating location at Ørland (Norway).
Aircrew Training Squadron
Flying Squadron 1
Flying Squadron 2
Flying Squadron 3 – disbanded in 2015
The Royal Saudi Air Force purchased five E-3A aircraft in 1983. In 2004, modifications began to convert KE-3A tankers into RE-3 electronic intelligence gathering aircraft.
RSAF No. 6 Wing (Prince Sultan Air Base – Al Kharj)
السرب الثامن عشر (al-Sarab al-Ththamin Eshr – No. 18 Squadron)
No. 19 Squadron – RE-3A/B (as well as Beechcraft 350ER-ISR)
No. 23 Squadron – KE-3A
The Royal Air Force purchased seven E-3Ds by October 1987, designated Sentry AEW.1 in British service. As of December 2020, only three remained in service after one was withdrawn from service in 2009 to be used as spares, two were withdrawn in March 2019 and a further one withdrawn in January 2020. The fleet had been given an out of service date (OSD) of December 2022. They form the E-3D Component of the NATO Airborne Early Warning and Control Force. However, that date was accelerated pursuant to the 2021 defence review and the aircraft made its final flight in U.K. service in August 2021.
RAF Waddington, Lincolnshire, England
No. 8 Squadron (1991–2021)
No. 23 Squadron (1996–2009)
No. 54 Squadron (Operational Conversion Unit 2005–?)
No. 56 Squadron (Operational Evaluation Unit 2008–?)
The United States Air Force has 31 operational E-3s as of December 2019
Tactical Air Command 1976–1992
Air Combat Command 1992–present
552d Air Control Wing – Tinker Air Force Base, Oklahoma
960th Airborne Air Control Squadron 2001–present (NAS Keflavik, Iceland 1979–1992)
963d Airborne Air Control Squadron 1976–present
964th Airborne Air Control Squadron 1977–present
965th Airborne Air Control Squadron 1978–1979, 1984–present
966th Airborne Air Control Squadron 1976–present
380th Air Expeditionary Wing – Al Dhafra Air Base, United Arab Emirates
968th Expeditionary Airborne Air Control Squadron 2013–present (Thumrait Air Base, Oman 2002–2003)
Air Force Reserve Command
513th Air Control Group (Associate) – Tinker AFB, Oklahoma
970th Airborne Air Control Squadron 1996–present (Personnel only, aircraft loaned by co-located 552nd ACW as needed)
413th Flight Test Group – Robins AFB, Georgia
10th Flight Test Squadron – Tinker AFB, Oklahoma 1994–present
Pacific Air Forces
3d Wing – Elmendorf AFB, Alaska
962d Airborne Air Control Squadron 1986–present
18th Wing – Kadena AB, Japan
961st Airborne Air Control Squadron 1979–present
Incidents and accidents
E-3s have been involved in three hull-loss accidents, and one radar antenna was destroyed during RSIP development.
On 22 September 1995, a U.S. Air Force E-3 Sentry (callsign Yukla 27, serial number 77-0354), crashed shortly after takeoff from Elmendorf AFB, Alaska. The plane lost power to both left side engines after ingesting several Canada geese during takeoff. The aircraft went down about northeast of the runway, killing all 24 crew members on board.
On 14 July 1996, a NATO E-3 Sentry (tail number LX-N90457) overran the runway and crashed into a sea wall at Préveza-Aktion Airport in Greece when the pilot attempted to abort takeoff after mistakenly believing that the aircraft had suffered a bird strike. There were no injuries and the aircraft was written off. Investigators could find no evidence that a bird strike or ingestion had occurred.
On 28 August 2009, a U.S. Air Force E-3C Sentry (serial number 83-0008) participating in a Red Flag exercise at Nellis Air Force Base, Nevada experienced a nose gear collapse on landing, resulting in a fire and damaging the aircraft beyond repair. All 32 crew members evacuated safely.
Specifications (USAF/NATO)
See also
References
Notes
Citations
Bibliography
External links
Royal Air Force E-3 Sentry information
NATO AWACS-Spotter Geilenkirchen website
Airborne Early Warning Association website
Aircraft first flown in 1972
AWACS aircraft
E-03 Sentry
Boeing 707
E-03 Sentry
Quadjets
1970s United States military reconnaissance aircraft |
10772 | https://en.wikipedia.org/wiki/Fair%20use | Fair use | Fair use is a doctrine in United States law that permits limited use of copyrighted material without having to first acquire permission from the copyright holder. Fair use is one of the limitations to copyright intended to balance the interests of copyright holders with the public interest in the wider distribution and use of creative works by allowing as a defense to copyright infringement claims certain limited uses that might otherwise be considered infringement. Unlike "fair dealing" rights that exist in most countries with a British legal history, the fair use right is a general exception that applies to all different kinds of uses with all types of works and turns on a flexible proportionality test that examines the purpose of the use, the amount used, and the impact on the market of the original work.
The doctrine of "fair use" originated in the Anglo-American common law during the 18th and 19th centuries as a way of preventing copyright law from being too rigidly applied and "stifling the very creativity which [copyright] law is designed to foster." Though originally a common law doctrine, it was enshrined in statutory law when the U.S. Congress passed the Copyright Act of 1976. The U.S. Supreme Court has issued several major decisions clarifying and reaffirming the fair use doctrine since the 1980s, most recently in the 2021 decision Google LLC v. Oracle America, Inc.
History
The 1710 Statute of Anne, an act of the Parliament of Great Britain, created copyright law to replace a system of private ordering enforced by the Stationers' Company. The Statute of Anne did not provide for legal unauthorized use of material protected by copyright. In Gyles v Wilcox, the Court of Chancery established the doctrine of "fair abridgement", which permitted unauthorized abridgement of copyrighted works under certain circumstances. Over time, this doctrine evolved into the modern concepts of fair use and fair dealing. Fair use was a common-law doctrine in the U.S. until it was incorporated into the Copyright Act of 1976.
The term "fair use" originated in the United States. Although related, the limitations and exceptions to copyright for teaching and library archiving in the U.S. are located in a different section of the statute. A similar-sounding principle, fair dealing, exists in some other common law jurisdictions but in fact it is more similar in principle to the enumerated exceptions found under civil law systems. Civil law jurisdictions have other limitations and exceptions to copyright.
In response to perceived over-expansion of copyrights, several electronic civil liberties and free expression organizations began in the 1990s to add fair use cases to their dockets and concerns. These include the Electronic Frontier Foundation ("EFF"), the American Civil Liberties Union, the National Coalition Against Censorship, the American Library Association, numerous clinical programs at law schools, and others. The "Chilling Effects" archive was established in 2002 as a coalition of several law school clinics and the EFF to document the use of cease and desist letters. In 2006 Stanford University began an initiative called "The Fair Use Project" (FUP) to help artists, particularly filmmakers, fight lawsuits brought against them by large corporations.
U.S. fair use factors
Examples of fair use in United States copyright law include commentary, search engines, criticism, parody, news reporting, research, and scholarship. Fair use provides for the legal, unlicensed citation or incorporation of copyrighted material in another author's work under a four-factor test.
The U.S. Supreme Court has traditionally characterized fair use as an affirmative defense, but in Lenz v. Universal Music Corp. (2015) (the "dancing baby" case), the U.S. Court of Appeals for the Ninth Circuit concluded that fair use was not merely a defense to an infringement claim, but was an expressly authorized right, and an exception to the exclusive rights granted to the author of a creative work by copyright law: "Fair use is therefore distinct from affirmative defenses where a use infringes a copyright, but there is no liability due to a valid excuse, e.g., misuse of a copyright."
The four factors of analysis for fair use set forth above derive from the opinion of Joseph Story in Folsom v. Marsh, in which the defendant had copied 353 pages from the plaintiff's 12-volume biography of George Washington in order to produce a separate two-volume work of his own. The court rejected the defendant's fair use defense.
The statutory fair use factors quoted above come from the Copyright Act of 1976, which is codified at . They were intended by Congress to restate, but not replace, the prior judge-made law. As Judge Pierre N. Leval has written, the statute does not "define or explain [fair use's] contours or objectives." While it "leav[es] open the possibility that other factors may bear on the question, the statute identifies none." That is, courts are entitled to consider other factors in addition to the four statutory factors.
1. Purpose and character of the use
The first factor is "the purpose and character of the use, including whether such use is of a commercial nature or is for nonprofit educational purposes." To justify the use as fair, one must demonstrate how it either advances knowledge or the progress of the arts through the addition of something new.
The considerations underlying the first factor were articulated by Justice Joseph Story in the 1841 copyright case Folsom v. Marsh.
A key consideration in later fair use cases is the extent to which the use is transformative. In the 1994 decision Campbell v. Acuff-Rose Music Inc, the U.S. Supreme Court held that when the purpose of the use is transformative, this makes the first factor more likely to favor fair use. Before the Campbell decision, federal Judge Pierre Leval argued that transformativeness is central to the fair use analysis in his 1990 article, Toward a Fair Use Standard. Blanch v. Koons is another example of a fair use case that focused on transformativeness. In 2006, Jeff Koons used a photograph taken by commercial photographer Andrea Blanch in a collage painting. Koons appropriated a central portion of an advertisement she had been commissioned to shoot for a magazine. Koons prevailed in part because his use was found transformative under the first fair use factor.
The Campbell case also addressed the subfactor mentioned in the quotation above, "whether such use is of a commercial nature or is for nonprofit educational purposes." In an earlier case, Sony Corp. of America v. Universal City Studios, Inc., the Supreme Court had stated that "every commercial use of copyrighted material is presumptively ... unfair." In Campbell, the court clarified that this is not a "hard evidentiary presumption" and that even the tendency that commercial purpose will "weigh against a finding of fair use ... will vary with the context." The Campbell court held that hip-hop group 2 Live Crew's parody of the song "Oh, Pretty Woman" was fair use, even though the parody was sold for profit. Thus, having a commercial purpose does not preclude a use from being found fair, even though it makes it less likely.
Likewise, the noncommercial purpose of a use makes it more likely to be found a fair use, but it does not make it a fair use automatically. For instance, in L.A. Times v. Free Republic, the court found that the noncommercial use of Los Angeles Times content by the Free Republic website was not fair use, since it allowed the public to obtain material at no cost that they would otherwise pay for. Richard Story similarly ruled in Code Revision Commission and State of Georgia v. Public.Resource.Org, Inc. that despite the fact that it is a non-profit and didn't sell the work, the service profited from its unauthorized publication of the Official Code of Georgia Annotated because of "the attention, recognition, and contributions" it received in association with the work.
Another factor is whether the use fulfills any of the preamble purposes, also mentioned in the legislation above, as these have been interpreted as "illustrative" of transformative use.
It is arguable, given the dominance of a rhetoric of the "transformative" in recent fair use determinations, that the first factor and transformativeness in general have become the most important parts of fair use.
2. Nature of the copyrighted work
Although the Supreme Court has ruled that the availability of copyright protection should not depend on the artistic quality or merit of a work, fair use analyses consider certain aspects of the work to be relevant, such as whether it is fictional or non-fictional.
To prevent the private ownership of work that rightfully belongs in the public domain, facts and ideas are not protected by copyright—only their particular expression or fixation merits such protection. On the other hand, the social usefulness of freely available information can weigh against the appropriateness of copyright for certain fixations. The Zapruder film of the assassination of President Kennedy, for example, was purchased and copyrighted by Time magazine. Yet its copyright was not upheld, in the name of the public interest, when Time tried to enjoin the reproduction of stills from the film in a history book on the subject in Time Inc v. Bernard Geis Associates.
In the decisions of the Second Circuit in Salinger v. Random House and in New Era Publications Int'l v. Henry Holt & Co, the aspect of whether the copied work has been previously published was considered crucial, assuming the right of the original author to control the circumstances of the publication of his work or preference not to publish at all. However, Judge Pierre N. Leval views this importation of certain aspects of France's droit moral d'artiste (moral rights of the artist) into American copyright law as "bizarre and contradictory" because it sometimes grants greater protection to works that were created for private purposes that have little to do with the public goals of copyright law, than to those works that copyright was initially conceived to protect. This is not to claim that unpublished works, or, more specifically, works not intended for publication, do not deserve legal protection, but that any such protection should come from laws about privacy, rather than laws about copyright. The statutory fair use provision was amended in response to these concerns by adding a final sentence: "The fact that a work is unpublished shall not itself bar a finding of fair use if such finding is made upon consideration of all the above factors."
3. Amount and substantiality
The third factor assesses the amount and substantiality of the copyrighted work that has been used. In general, the less that is used in relation to the whole, the more likely the use will be considered fair.
Using most or all of a work does not bar a finding of fair use. It simply makes the third factor less favorable to the defendant. For instance, in Sony Corp. of America v. Universal City Studios, Inc. copying entire television programs for private viewing was upheld as fair use, at least when the copying is done for the purposes of time-shifting. In Kelly v. Arriba Soft Corporation, the Ninth Circuit held that copying an entire photo to use as a thumbnail in online search results did not even weigh against fair use, "if the secondary user only copies as much as is necessary for his or her intended use".
However, even the use of a small percentage of a work can make the third factor unfavorable to the defendant, because the "substantiality" of the portion used is considered in addition to the amount used. For instance, in Harper & Row v. Nation Enterprises, the U.S. Supreme Court held that a news article's quotation of fewer than 400 words from President Ford's 200,000-word memoir was sufficient to make the third fair use factor weigh against the defendants, because the portion taken was the "heart of the work". This use was ultimately found not to be fair.
4. Effect upon work's value
The fourth factor measures the effect that the allegedly infringing use has had on the copyright owner's ability to exploit his original work. The court not only investigates whether the defendant's specific use of the work has significantly harmed the copyright owner's market, but also whether such uses in general, if widespread, would harm the potential market of the original. The burden of proof here rests on the copyright owner, who must demonstrate the impact of the infringement on commercial use of the work.
For example, in Sony Corp v. Universal City Studios, the copyright owner, Universal, failed to provide any empirical evidence that the use of Betamax had either reduced their viewership or negatively impacted their business. In Harper & Row, the case regarding President Ford's memoirs, the Supreme Court labeled the fourth factor "the single most important element of fair use" and it has enjoyed some level of primacy in fair use analyses ever since. Yet the Supreme Court's more recent announcement in Campbell v. Acuff-Rose Music Inc that "all [four factors] are to be explored, and the results weighed together, in light of the purposes of copyright" has helped modulate this emphasis in interpretation.
In evaluating the fourth factor, courts often consider two kinds of harm to the potential market for the original work.
First, courts consider whether the use in question acts as a direct market substitute for the original work. In Campbell, the Supreme Court stated that "when a commercial use amounts to mere duplication of the entirety of the original, it clearly supersedes the object of the original and serves as a market replacement for it, making it likely that cognizable market harm to the original will occur". In one instance, a court ruled that this factor weighed against a defendant who had made unauthorized movie trailers for video retailers, since his trailers acted as direct substitutes for the copyright owner's official trailers.
Second, courts also consider whether potential market harm might exist beyond that of direct substitution, such as in the potential existence of a licensing market. This consideration has weighed against commercial copy shops that make copies of articles in course-packs for college students, when a market already existed for the licensing of course-pack copies.
Courts recognize that certain kinds of market harm do not negate fair use, such as when a parody or negative review impairs the market of the original work. Copyright considerations may not shield a work against adverse criticism.
Additional factors
As explained by Judge Leval, courts are permitted to include additional factors in their analysis.
One such factor is acknowledgement of the copyrighted source. Giving the name of the photographer or author may help, but it does not automatically make a use fair. While plagiarism and copyright infringement are related matters, they are not identical. Plagiarism (using someone's words, ideas, images, etc. without acknowledgment) is a matter of professional ethics, while copyright is a matter of law, and protects exact expression, not ideas. One can plagiarize even a work that is not protected by copyright, for example by passing off a line from Shakespeare as one's own. Conversely, attribution prevents accusations of plagiarism, but it does not prevent infringement of copyright. For example, reprinting a copyrighted book without permission, while citing the original author, would be copyright infringement but not plagiarism.
U.S. fair use procedure and practice
The U.S. Supreme Court described fair use as an affirmative defense in Campbell v. Acuff-Rose Music, Inc. This means that in litigation on copyright infringement, the defendant bears the burden of raising and proving that the use was fair and not an infringement. Thus, fair use need not even be raised as a defense unless the plaintiff first shows (or the defendant concedes) a case of copyright infringement. If the work was not copyrightable, the term had expired, or the defendant's work borrowed only a small amount, for instance, then the plaintiff cannot make out a case of infringement, and the defendant need not even raise the fair use defense. In addition, fair use is only one of many limitations, exceptions, and defenses to copyright infringement. Thus, a case can be defeated without relying on fair use. For instance, the Audio Home Recording Act establishes that it is legal, using certain technologies, to make copies of audio recordings for non-commercial personal use.
Some copyright owners claim infringement even in circumstances where the fair use defense would likely succeed, in hopes that the user will refrain from the use rather than spending resources in their defense. Strategic lawsuit against public participation (SLAPP) cases that allege copyright infringement, patent infringement, defamation, or libel may come into conflict with the defendant's right to freedom of speech, and that possibility has prompted some jurisdictions to pass anti-SLAPP legislation that raises the plaintiff's burdens and risk.
Although fair use ostensibly permits certain uses without liability, many content creators and publishers try to avoid a potential court battle by seeking a legally unnecessary license from copyright owners for any use of non-public domain material, even in situations where a fair use defense would likely succeed. The simple reason is that the license terms negotiated with the copyright owner may be much less expensive than defending against a copyright suit, or having the mere possibility of a lawsuit threaten the publication of a work in which a publisher has invested significant resources.
Fair use rights take precedence over the author's interest. Thus the copyright holder cannot use a non-binding disclaimer, or notification, to revoke the right of fair use on works. However, binding agreements such as contracts or license agreements may take precedence over fair use rights.
The practical effect of the fair use doctrine is that a number of conventional uses of copyrighted works are not considered infringing. For instance, quoting from a copyrighted work in order to criticize or comment upon it or teach students about it, is considered a fair use. Certain well-established uses cause few problems. A teacher who prints a few copies of a poem to illustrate a technique will have no problem on all four of the above factors (except possibly on amount and substantiality), but some cases are not so clear. All the factors are considered and balanced in each case: a book reviewer who quotes a paragraph as an example of the author's style will probably fall under fair use even though they may sell their review commercially; but a non-profit educational website that reproduces whole articles from technical magazines will probably be found to infringe if the publisher can demonstrate that the website affects the market for the magazine, even though the website itself is non-commercial.
Fair use is decided on a case-by-case basis, on the entirety of circumstances. The same act done by different means or for a different purpose can gain or lose fair use status.
Fair use in particular areas
Computer code
The Oracle America, Inc. v. Google, Inc. case revolves around the use of application programming interfaces (APIs) used to define functionality of the Java programming language, created by Sun Microsystems and now owned by Oracle Corporation. Google used the APIs' definitions and their structure, sequence and organization (SSO) in creating the Android operating system to support the mobile device market. Oracle had sued Google in 2010 over both patent and copyright violations, but after two rounds of trial and appeal the matter was narrowed down to whether Google's use of the definition and SSO of Oracle's Java APIs (determined to be copyrightable) was within fair use. The Federal Circuit Court of Appeals ruled against Google, stating that while Google could defend its use in the nature of the copyrighted work, its use was not transformative, and more significantly, it commercially harmed Oracle, which was also seeking entry to the mobile market. However, the U.S. Supreme Court reversed this decision, deciding that Google's actions satisfy all four tests for fair use, and that granting Oracle exclusive rights to use Java APIs on mobile markets "would interfere with, not further, copyright's basic creativity objectives."
Documentary films
In April 2006, the filmmakers of the Loose Change series were served with a lawsuit by Jules and Gédéon Naudet over the film's use of their footage, specifically footage of the firefighters discussing the collapse of the World Trade Center.
With the help of an intellectual property lawyer, the creators of Loose Change successfully argued that a majority of the footage used was for historical purposes and was significantly transformed in the context of the film. They agreed to remove a few shots that were used as B-roll and served no purpose to the greater discussion. The case was settled and a potential multimillion-dollar lawsuit was avoided.
This Film Is Not Yet Rated also relied on fair use to feature several clips from copyrighted Hollywood productions. The director had originally planned to license these clips from their studio owners but discovered that studio licensing agreements would have prohibited him from using this material to criticize the entertainment industry. This prompted him to invoke the fair use doctrine, which permits limited use of copyrighted material to provide analysis and criticism of published works.
File sharing
In 2009, fair use appeared as a defense in lawsuits against filesharing. Charles Nesson argued that file-sharing qualifies as fair use in his defense of alleged filesharer Joel Tenenbaum. Kiwi Camara, defending alleged filesharer Jammie Thomas, announced a similar defense.
However, the court in the Tenenbaum case rejected the idea that file-sharing is fair use.
Internet publication
A U.S. court case from 2003, Kelly v. Arriba Soft Corp., provides and develops the relationship between thumbnails, inline linking, and fair use. In the lower District Court case on a motion for summary judgment, Arriba Soft's use of thumbnail pictures and inline linking from Kelly's website in Arriba Soft's image search engine was found not to be fair use. That decision was appealed and contested by Internet rights activists such as the Electronic Frontier Foundation, who argued that it was fair use.
On appeal, the Ninth Circuit Court of Appeals found in favor of the defendant, Arriba Soft. In reaching its decision, the court utilized the statutory four-factor analysis. First, it found the purpose of creating the thumbnail images as previews to be sufficiently transformative, noting that they were not meant to be viewed at high resolution as the original artwork was. Second, the photographs had already been published, diminishing the significance of their nature as creative works. Third, although normally making a "full" replication of a copyrighted work may appear to violate copyright, here it was found to be reasonable and necessary in light of the intended use. Lastly, the court found that the market for the original photographs would not be substantially diminished by the creation of the thumbnails. To the contrary, the thumbnail searches could increase the exposure of the originals. In looking at all these factors as a whole, the court found that the thumbnails were fair use and remanded the case to the lower court for trial after issuing a revised opinion on July 7, 2003. The remaining issues were resolved with a default judgment after Arriba Soft had experienced significant financial problems and failed to reach a negotiated settlement.
In August 2008, Judge Jeremy Fogel of the Northern District of California ruled in Lenz v. Universal Music Corp. that copyright holders cannot order a deletion of an online file without determining whether that posting reflected "fair use" of the copyrighted material. The case involved Stephanie Lenz, a writer and editor from Gallitzin, Pennsylvania, who made a home video of her thirteen-month-old son dancing to Prince's song "Let's Go Crazy" and posted the video on YouTube. Four months later, Universal Music, the owner of the copyright to the song, ordered YouTube to remove the video under the Digital Millennium Copyright Act. Lenz notified YouTube immediately that her video was within the scope of fair use, and she demanded that it be restored. YouTube complied after six weeks, rather than the two weeks required by the Digital Millennium Copyright Act. Lenz then sued Universal Music in California for her legal costs, claiming the music company had acted in bad faith by ordering removal of a video that represented fair use of the song. On appeal, the Court of Appeals for the Ninth Circuit ruled that a copyright owner must affirmatively consider whether the complained of conduct constituted fair use before sending a takedown notice under the Digital Millennium Copyright Act, rather than waiting for the alleged infringer to assert fair use. 801 F.3d 1126 (9th Cir. 2015). "Even if, as Universal urges, fair use is classified as an 'affirmative defense,' we hold—for the purposes of the DMCA—fair use is uniquely situated in copyright law so as to be treated differently than traditional affirmative defenses. We conclude that because 17 U.S.C. § 107 created a type of non-infringing use, fair use is "authorized by the law" and a copyright holder must consider the existence of fair use before sending a takedown notification under § 512(c)."
In June 2011, Judge Philip Pro of the District of Nevada ruled in Righthaven v. Hoehn that the posting of an entire editorial article from the Las Vegas Review-Journal in a comment as part of an online discussion was unarguably fair use. Judge Pro noted that "Noncommercial, nonprofit use is presumptively fair. ... Hoehn posted the Work as part of an online discussion. ... This purpose is consistent with comment, for which 17 U.S.C. § 107 provides fair use protection. ... It is undisputed that Hoehn posted the entire work in his comment on the Website. ... wholesale copying does not preclude a finding of fair use. ... there is no genuine issue of material fact that Hoehn's use of the Work was fair and summary judgment is appropriate." On appeal, the Court of Appeals for the Ninth Circuit ruled that Righthaven did not even have the standing needed to sue Hoehn for copyright infringement in the first place.
Professional communities
In addition to considering the four fair use factors, courts deciding fair use cases also look to the standards and practices of the professional community where the case comes from. Among the communities are documentarians, librarians, makers of Open Courseware, visual art educators, and communications professors.
Such codes of best practices have permitted communities of practice to make more informed risk assessments in employing fair use in their daily practice. For instance, broadcasters, cablecasters, and distributors typically require filmmakers to obtain errors and omissions insurance before the distributor will take on the film. Such insurance protects against errors and omissions made during the copyright clearance of material in the film. Before the Documentary Filmmakers' Statement of Best Practices in Fair Use was created in 2005, it was nearly impossible to obtain errors and omissions insurance for copyright clearance work that relied in part on fair use. This meant documentarians had either to obtain a license for the material or to cut it from their films. In many cases, it was impossible to license the material because the filmmaker sought to use it in a critical way. Soon after the best practices statement was released, all errors and omissions insurers in the U.S. shifted to begin offering routine fair use coverage.
Music sampling
Before 1991, sampling in certain genres of music was accepted practice and the copyright considerations were viewed as largely irrelevant. The strict decision against rapper Biz Markie's appropriation of a Gilbert O'Sullivan song in the case Grand Upright Music, Ltd. v. Warner Bros. Records Inc. changed practices and opinions overnight. Samples now had to be licensed, as long as they rose "to a level of legally cognizable appropriation." This left the door open for the de minimis doctrine, for short or unrecognizable samples; such uses would not rise to the level of copyright infringement, because under the de minimis doctrine, "the law does not care about trifles." However, three years later, the Sixth Circuit effectively eliminated the de minimis doctrine in the Bridgeport Music, Inc. v. Dimension Films case, holding that artists must "get a license or do not sample". The Court later clarified that its opinion did not apply to fair use, but between Grand Upright and Bridgeport, practice had effectively shifted to eliminate unlicensed sampling.
Parody
Producers or creators of parodies of a copyrighted work have been sued for infringement by the targets of their ridicule, even though such use may be protected as fair use. These fair use cases distinguish between parodies, which use a work in order to poke fun at or comment on the work itself, and satire, which comments on something else. Courts have been more willing to grant fair use protections to parodies than to satires, but the ultimate outcome in either circumstance will turn on the application of the four fair use factors.
For example, when Tom Forsythe appropriated Barbie dolls for his photography project "Food Chain Barbie" (depicting several copies of the doll naked and disheveled and about to be baked in an oven, blended in a food mixer, and the like), Mattel lost its copyright infringement lawsuit against him because his work effectively parodies Barbie and the values she represents. In Rogers v. Koons, Jeff Koons tried to justify his appropriation of Art Rogers' photograph "Puppies" in his sculpture "String of Puppies" with the same parody defense. Koons lost because his work was not presented as a parody of Rogers' photograph in particular, but as a satire of society at large. This was insufficient to render the use fair.
In Campbell v. Acuff-Rose Music Inc the U.S. Supreme Court recognized parody as a potential fair use, even when done for profit. Roy Orbison's publisher, Acuff-Rose Music, had sued 2 Live Crew in 1989 for their use of Orbison's "Oh, Pretty Woman" in a mocking rap version with altered lyrics. The Supreme Court viewed 2 Live Crew's version as a ridiculing commentary on the earlier work, and ruled that when the parody was itself the product rather than mere advertising, commercial nature did not bar the defense. The Campbell court also distinguished parodies from satire, which they described as a broader social critique not intrinsically tied to ridicule of a specific work and so not deserving of the same use exceptions as parody because the satirist's ideas are capable of expression without the use of the other particular work.
A number of appellate decisions have recognized that a parody may be a protected fair use, including the Second (Leibovitz v. Paramount Pictures Corp.); the Ninth (Mattel v. Walking Mountain Productions); and the Eleventh Circuits (Suntrust Bank v. Houghton Mifflin Co.). In the 2001 Suntrust Bank case, Suntrust Bank and the Margaret Mitchell estate unsuccessfully brought suit to halt the publication of The Wind Done Gone, which reused many of the characters and situations from Gone with the Wind but told the events from the point of view of the enslaved people rather than the slaveholders. The Eleventh Circuit, applying Campbell, found that The Wind Done Gone was fair use and vacated the district court's injunction against its publication.
Cases in which a satirical use was found to be fair include Blanch v. Koons and Williams v. Columbia Broadcasting Systems.
Text and data mining
The transformative nature of computer based analytical processes such as text mining, web mining and data mining has led many to form the view that such uses would be protected under fair use. This view was substantiated by the rulings of Judge Denny Chin in Authors Guild, Inc. v. Google, Inc., a case involving mass digitisation of millions of books from research library collections. As part of the ruling that found the book digitisation project was fair use, the judge stated "Google Books is also transformative in the sense that it has transformed book text into data for purposes of substantive research, including data mining and text mining in new areas".
Text and data mining was subject to further review in Authors Guild v. HathiTrust, a case derived from the same digitization project mentioned above. Judge Harold Baer, in finding that the defendant's uses were transformative, stated that 'the search capabilities of the [HathiTrust Digital Library] have already given rise to new methods of academic inquiry such as text mining."
Reverse engineering
There is a substantial body of fair use law regarding reverse engineering of computer software, hardware, network protocols, encryption and access control systems.
Social media
In May 2015, artist Richard Prince released an exhibit of photographs at the Gagosian Gallery in New York, entitled "New Portraits". His exhibit consisted of screenshots of Instagram users' pictures, which were largely unaltered, with Prince's commentary added beneath. Although no Instagram users authorized Prince to use their pictures, Prince argued that the addition of his own commentary to the pictures constituted fair use, such that he did not need permission to use the pictures or to pay royalties for his use. One of the pieces sold for $90,000. With regard to the works presented by Prince, the gallery where the pictures were showcased posted notices that "All images are subject to copyright." Several lawsuits were filed against Prince over the New Portraits exhibit.
Influence internationally
While U.S. fair use law has been influential in some countries, some countries have fair use criteria drastically different from those in the U.S., and some countries do not have a fair use framework at all. Some countries have the concept of fair dealing instead of fair use, while others use different systems of limitations and exceptions to copyright. Many countries have some reference to an exemption for educational use, though the extent of this exemption varies widely.
Sources differ on whether fair use is fully recognized by countries other than the United States. American University's infojustice.org published a compilation of portions of over 40 nations' laws that explicitly mention fair use or fair dealing, and asserts that some of the fair dealing laws, such as Canada's, have evolved (such as through judicial precedents) to be quite close to those of the United States. This compilation includes fair use provisions from Bangladesh, Israel, South Korea, the Philippines, Sri Lanka, Taiwan, Uganda, and the United States. However, Paul Geller's 2009 International Copyright Law and Practice says that while some other countries recognize similar exceptions to copyright, only the United States and Israel fully recognize the concept of fair use.
The International Intellectual Property Alliance (IIPA), a lobby group of U.S. copyright industry bodies, has objected to international adoption of U.S.-style fair use exceptions, alleging that such laws have a dependency on common law and long-term legal precedent that may not exist outside the United States.
Israel
In November 2007, the Israeli Knesset passed a new copyright law that included a U.S.-style fair use exception. The law, which took effect in May 2008, permits the fair use of copyrighted works for purposes such as private study, research, criticism, review, news reporting, quotation, or instruction or testing by an educational institution. The law sets up four factors, similar to the U.S. fair use factors (see above), for determining whether a use is fair.
On September 2, 2009, the Tel Aviv District court ruled in The Football Association Premier League Ltd. v. Ploni that fair use is a user right. The court also ruled that streaming of live soccer games on the Internet is fair use. In doing so, the court analyzed the four fair use factors adopted in 2007 and cited U.S. case law, including Kelly v. Arriba Soft Corp. and Perfect 10, Inc. v. Amazon.com, Inc..
Malaysia
A 2012 amendment to section 13(2)(a) of the Copyright Act 1987 created an exception called 'fair dealing' which is not restricted in its purpose. The four factors for fair use as specified in US law are included.
Poland
Fair use exists in Polish law and is covered by the Polish copyright law articles 23 to 35.
Unlike in the United States, Polish fair use distinguishes between private and public use. In Poland, public use carries a risk of fines, and a defendant accused of making a public use must prove that the use was private or that other mitigating circumstances apply. Finally, Polish law treats all cases in which private material was made public as a potential copyright infringement, where fair use can apply but has to be proven by reasonable circumstances.
Singapore
Section 35 of the Singaporean Copyright Act 1987 was amended in 2004 to allow a 'fair dealing' exception for any purpose. The four fair use factors similar to those in US law are included in the new section 35.
South Korea
The Korean Copyright Act was amended to include a fair use provision, Article 35–3, in 2012. The law outlines a four-factor test similar to that used under U.S. law.
Fair dealing
Fair dealing allows specific exceptions to copyright protections. The open-ended concept of fair use is generally not observed in jurisdictions where fair dealing is in place, although this does vary. Fair dealing is established in legislation in Australia, Canada, New Zealand, Singapore, India, South Africa and the United Kingdom, among others.
Australia
While Australian copyright exceptions are based on the Fair Dealing system, since 1998 a series of Australian government inquiries have examined, and in most cases recommended, the introduction of a "flexible and open" Fair Use system into Australian copyright law. From 1998 to 2017 there were eight Australian government inquiries that considered the question of whether fair use should be adopted in Australia. Six reviews recommended that Australia adopt a "Fair Use" model of copyright exceptions: two inquiries specifically into the Copyright Act (1998, 2014) and four broader reviews (two in 2004, and one each in 2013 and 2016). One review (2000) recommended against the introduction of fair use and another (2005) issued no final report. Two of the recommendations were specifically in response to the stricter copyright rules introduced as part of the Australia–United States Free Trade Agreement (AUSFTA), while the most recent two, by the Australian Law Reform Commission (ALRC) and the Productivity Commission (PC), were made with reference to strengthening Australia's "digital economy".
Canada
The Copyright Act of Canada establishes fair dealing in Canada, which allows specific exceptions to copyright protection. In 1985, the Sub-Committee on the Revision of Copyright rejected replacing fair dealing with an open-ended system, and in 1986 the Canadian government agreed that "the present fair dealing provisions should not be replaced by the substantially wider 'fair use' concept". Since then, the Canadian fair dealing exception has broadened. It is now similar in effect to U.S. fair use, even though the frameworks are different.
CCH Canadian Ltd v. Law Society of Upper Canada [2004] 1 S.C.R. 339, is a landmark Supreme Court of Canada case that establishes the bounds of fair dealing in Canadian copyright law. The Law Society of Upper Canada was sued for copyright infringement for providing photocopy services to researchers. The Court unanimously held that the Law Society's practice fell within the bounds of fair dealing.
United Kingdom
Within the United Kingdom, fair dealing is a legal doctrine that provides an exception to the nation's copyright law in cases where the copyright infringement is for the purposes of non-commercial research or study, criticism or review, or for the reporting of current events.
Policy arguments about fair use
A balanced copyright law provides an economic benefit to many high-tech businesses such as search engines and software developers. Fair use is also crucial to non-technology industries such as insurance, legal services, and newspaper publishers.
On September 12, 2007, the Computer and Communications Industry Association (CCIA), a group representing companies including Google Inc., Microsoft Inc., Oracle Corporation, Sun Microsystems, Yahoo! and other high-tech companies, released a study that found that fair use exceptions to US copyright laws were responsible for more than $4.5 trillion in annual revenue for the United States economy, representing one-sixth of the total US GDP. The study was conducted using a methodology developed by the World Intellectual Property Organization.
The study found that fair use dependent industries are directly responsible for more than eighteen percent of US economic growth and nearly eleven million American jobs. "As the United States economy becomes increasingly knowledge-based, the concept of fair use can no longer be discussed and legislated in the abstract. It is the very foundation of the digital age and a cornerstone of our economy," said Ed Black, President and CEO of CCIA. "Much of the unprecedented economic growth of the past ten years can actually be credited to the doctrine of fair use, as the Internet itself depends on the ability to use content in a limited and unlicensed manner."
Fair Use Week
Fair Use Week is an international event that celebrates fair use and fair dealing. Fair Use Week was first proposed on a Fair Use Allies listserv, which was an outgrowth of the Library Code of Best Practices Capstone Event, celebrating the development and promulgation of ARL's Code of Best Practices in Fair Use for Academic and Research Libraries. While the idea was not taken up nationally, Kyle Courtney, Copyright Advisor at Harvard University, launched the first ever Fair Use Week at Harvard University in February 2014, with a full week of activities celebrating fair use. The first Fair Use Week included blog posts from national and international fair use experts, live fair use panels, fair use workshops, and a Fair Use Stories Tumblr blog, where people from the world of art, music, film, and academia shared stories about the importance of fair use to their community. The first Fair Use Week was so successful that in 2015 ARL teamed up with Courtney and helped organize the Second Annual Fair Use Week, with participation from many more institutions. ARL also launched an official Fair Use Week website, which was transferred from Pia Hunter, who attended the Library Code of Best Practices Capstone Event and had originally purchased the domain name fairuseweek.org.
See also
References
Further reading
United States. Congress. House of Representatives (2014). The Scope of Fair Use: Hearing Before the Subcommittee on Courts, Intellectual Property, and the Internet of the Committee on the Judiciary, House of Representatives, One Hundred Thirteenth Congress, Second Session, January 28, 2014.
External links
U.S. Copyright Office Fair Use Index, a searchable database of notable fair use cases in U.S. federal courts
The Fair Use/Fair Dealing Handbook, a compilation of national statutes that explicitly refer to fair use or fair dealing
CHEER, a repository of copyright educational resources for higher education
10969 | https://en.wikipedia.org/wiki/Field-programmable%20gate%20array | Field-programmable gate array | A field-programmable gate array (FPGA) is an integrated circuit designed to be configured by a customer or a designer after manufacturing, hence the term field-programmable. The FPGA configuration is generally specified using a hardware description language (HDL), similar to that used for an application-specific integrated circuit (ASIC). Circuit diagrams were previously used to specify the configuration, but this is increasingly rare due to the advent of electronic design automation tools.
FPGAs contain an array of programmable logic blocks, and a hierarchy of reconfigurable interconnects allowing blocks to be wired together. Logic blocks can be configured to perform complex combinational functions, or act as simple logic gates like AND and XOR. In most FPGAs, logic blocks also include memory elements, which may be simple flip-flops or more complete blocks of memory. Many FPGAs can be reprogrammed to implement different logic functions, allowing flexible reconfigurable computing as performed in computer software.
FPGAs play an important role in embedded system development because they allow system software (SW) development to start simultaneously with hardware (HW) development, enable system performance simulation at a very early phase of development, and allow various system partitioning (SW and HW) trials and iterations before the system architecture is finally frozen.
History
The FPGA industry sprouted from programmable read-only memory (PROM) and programmable logic devices (PLDs). PROMs and PLDs both had the option of being programmed in batches in a factory or in the field (field-programmable). However, programmable logic was hard-wired between logic gates.
Altera was founded in 1983 and delivered the industry's first reprogrammable logic device in 1984 – the EP300 – which featured a quartz window in the package that allowed users to shine an ultra-violet lamp on the die to erase the EPROM cells that held the device configuration.
Xilinx co-founders Ross Freeman and Bernard Vonderschmitt invented the first commercially viable field-programmable gate array in 1985 – the XC2064. The XC2064 had programmable gates and programmable interconnects between gates, the beginnings of a new technology and market. The XC2064 had 64 configurable logic blocks (CLBs), each with two three-input lookup tables (LUTs). More than 20 years later, Freeman was entered into the National Inventors Hall of Fame for his invention.
In 1987, the Naval Surface Warfare Center funded an experiment proposed by Steve Casselman to develop a computer that would implement 600,000 reprogrammable gates. Casselman was successful and a patent related to the system was issued in 1992.
Altera and Xilinx continued unchallenged and quickly grew from 1985 to the mid-1990s when competitors sprouted up, eroding a significant portion of their market share. By 1993, Actel (now Microsemi) was serving about 18 percent of the market.
The 1990s were a period of rapid growth for FPGAs, both in circuit sophistication and the volume of production. In the early 1990s, FPGAs were primarily used in telecommunications and networking. By the end of the decade, FPGAs found their way into consumer, automotive, and industrial applications.
By 2013, Altera (31 percent), Actel (10 percent) and Xilinx (36 percent) together represented approximately 77 percent of the FPGA market.
Companies like Microsoft have started to use FPGAs to accelerate high-performance, computationally intensive systems (like the data centers that operate their Bing search engine), due to the performance per watt advantage FPGAs deliver. Microsoft began using FPGAs to accelerate Bing in 2014, and in 2018 began deploying FPGAs across other data center workloads for their Azure cloud computing platform.
The following timelines indicate progress in different aspects of FPGA design:
Gates
1987: 9,000 gates, Xilinx
1992: 600,000, Naval Surface Warfare Center
Early 2000s: millions
2013: 50 million, Xilinx
Market size
1985: First commercial FPGA: Xilinx XC2064
1987: $14 million
: >$385 million
2005: $1.9 billion
2010 estimates: $2.75 billion
2013: $5.4 billion
2020 estimate: $9.8 billion
Design starts
A design start is a new custom design for implementation on an FPGA.
2005: 80,000
2008: 90,000
Design
Contemporary FPGAs have large resources of logic gates and RAM blocks to implement complex digital computations. As FPGA designs employ very fast I/O rates and bidirectional data buses, it becomes a challenge to verify correct timing of valid data within setup time and hold time.
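As a rough illustration of the check that static timing analysis performs for each register-to-register path, the following Python sketch computes setup slack from a target clock period and assumed path delays (the function and all delay figures are illustrative assumptions, not any vendor's actual tool or data):

    # Simplified setup-timing check for one register-to-register path.
    # All delays are in nanoseconds and are illustrative assumptions.
    def setup_slack(clock_period, t_clk_to_q, t_logic, t_routing, t_setup):
        arrival = t_clk_to_q + t_logic + t_routing  # when data reaches the capturing flip-flop
        required = clock_period - t_setup           # latest arrival that still meets setup time
        return required - arrival                   # positive slack means timing is met

    # Example: a 100 MHz clock (10 ns period) with assumed delays on the path.
    slack = setup_slack(clock_period=10.0, t_clk_to_q=0.5,
                        t_logic=4.2, t_routing=3.1, t_setup=0.4)
    print("setup slack: %.2f ns" % slack)  # prints 1.80, so this path meets timing

A real tool repeats this kind of calculation, together with the corresponding hold-time check, for every timing path in the design and reports the worst-case slack.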
Floor planning enables resource allocation within FPGAs to meet these time constraints. FPGAs can be used to implement any logical function that an ASIC can perform. The ability to update the functionality after shipping, partial re-configuration of a portion of the design and the low non-recurring engineering costs relative to an ASIC design (notwithstanding the generally higher unit cost), offer advantages for many applications.
Some FPGAs have analog features in addition to digital functions. The most common analog feature is a programmable slew rate on each output pin, allowing the engineer to set low rates on lightly loaded pins that would otherwise ring or couple unacceptably, and to set higher rates on heavily loaded pins on high-speed channels that would otherwise run too slowly. Also common are quartz-crystal oscillators, on-chip resistance-capacitance oscillators, and phase-locked loops with embedded voltage-controlled oscillators used for clock generation and management as well as for high-speed serializer-deserializer (SERDES) transmit clocks and receiver clock recovery. Fairly common are differential comparators on input pins designed to be connected to differential signaling channels. A few "mixed signal FPGAs" have integrated peripheral analog-to-digital converters (ADCs) and digital-to-analog converters (DACs) with analog signal conditioning blocks allowing them to operate as a system-on-a-chip (SoC). Such devices blur the line between an FPGA, which carries digital ones and zeros on its internal programmable interconnect fabric, and field-programmable analog array (FPAA), which carries analog values on its internal programmable interconnect fabric.
Logic blocks
The most common FPGA architecture consists of an array of logic blocks (called configurable logic blocks, CLBs, or logic array blocks, LABs, depending on vendor), I/O pads, and routing channels. Generally, all the routing channels have the same width (number of wires). Multiple I/O pads may fit into the height of one row or the width of one column in the array.
An application circuit must be mapped into an FPGA with adequate resources. While the number of CLBs/LABs and I/Os required is easily determined from the design, the number of routing tracks needed may vary considerably even among designs with the same amount of logic.
For example, a crossbar switch requires much more routing than a systolic array with the same gate count. Since unused routing tracks increase the cost (and decrease the performance) of the part without providing any benefit, FPGA manufacturers try to provide just enough tracks so that most designs that will fit in terms of lookup tables (LUTs) and I/Os can be routed. This is determined by estimates such as those derived from Rent's rule or by experiments with existing designs. Network-on-chip architectures for routing and interconnection are also being developed.
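As an indication of how such estimates are formed, the sketch below applies Rent's rule, which relates the number of external terminals T of a block of logic to its block count g by T = t * g^p; the values chosen for t and p are generic illustrative assumptions rather than figures for any particular FPGA family:

    # Rent's rule: T = t * g**p, where g is the number of gates or logic blocks,
    # t is the average number of terminals per block, and p is the Rent exponent.
    # The default values of t and p below are illustrative assumptions.
    def rent_terminals(gates, t=4.0, p=0.6):
        return t * gates ** p

    for g in (100, 1000, 10000):
        print("%6d logic blocks -> roughly %.0f external connections" % (g, rent_terminals(g)))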
In general, a logic block consists of a few logical cells (called ALM, LE, slice, etc.). A typical cell consists of a 4-input LUT, a full adder (FA) and a D-type flip-flop. The 4-input LUT might be implemented as two 3-input LUTs. In normal mode those are combined into a 4-input LUT through the first multiplexer (mux). In arithmetic mode, their outputs are fed to the adder. The selection of mode is programmed into the second mux. The output can be either synchronous or asynchronous, depending on the programming of the third mux. In practice, entire or parts of the adder are stored as functions into the LUTs in order to save space.
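To make the lookup-table idea concrete, here is a minimal behavioral model of such a cell in Python (a sketch for illustration only; real cells vary between vendors and include the carry chain and mode multiplexers described above). A 16-bit configuration word holds the truth table of an arbitrary 4-input function, and the D flip-flop provides the optional synchronous output:

    # Behavioral sketch of a 4-input LUT plus D flip-flop logic cell.
    class LogicCell:
        def __init__(self, lut_config):
            # lut_config: 16-bit integer; bit i is the output for input pattern i.
            self.lut_config = lut_config & 0xFFFF
            self.ff = 0  # state of the D flip-flop

        def lut(self, a, b, c, d):
            index = (d << 3) | (c << 2) | (b << 1) | a
            return (self.lut_config >> index) & 1

        def clock(self, a, b, c, d):
            # On a clock edge the flip-flop captures the LUT output.
            self.ff = self.lut(a, b, c, d)
            return self.ff

    # Configure the LUT as a 4-input XOR: output is the parity of the input bits.
    xor4 = LogicCell(sum((bin(i).count("1") % 2) << i for i in range(16)))
    print(xor4.lut(1, 0, 1, 1))    # combinational (asynchronous) output: 1
    print(xor4.clock(1, 0, 1, 1))  # registered (synchronous) output: 1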
Hard blocks
Modern FPGA families expand upon the above capabilities to include higher level functionality fixed in silicon. Having these common functions embedded in the circuit reduces the area required and gives those functions increased speed compared to building them from logical primitives. Examples of these include multipliers, generic DSP blocks, embedded processors, high speed I/O logic and embedded memories.
Higher-end FPGAs can contain high speed multi-gigabit transceivers and hard IP cores such as processor cores, Ethernet medium access control units, PCI/PCI Express controllers, and external memory controllers. These cores exist alongside the programmable fabric, but they are built out of transistors instead of LUTs so they have ASIC-level performance and power consumption without consuming a significant amount of fabric resources, leaving more of the fabric free for the application-specific logic. The multi-gigabit transceivers also contain high performance analog input and output circuitry along with high-speed serializers and deserializers, components which cannot be built out of LUTs. Higher-level physical layer (PHY) functionality such as line coding may or may not be implemented alongside the serializers and deserializers in hard logic, depending on the FPGA.
Soft core
An alternate approach to using hard-macro processors is to make use of soft processor IP cores that are implemented within the FPGA logic. Nios II, MicroBlaze and Mico32 are examples of popular softcore processors. Many modern FPGAs are programmed at "run time", which has led to the idea of reconfigurable computing or reconfigurable systems – CPUs that reconfigure themselves to suit the task at hand. Additionally, new, non-FPGA architectures are beginning to emerge. Software-configurable microprocessors such as the Stretch S5000 adopt a hybrid approach by providing an array of processor cores and FPGA-like programmable cores on the same chip.
Integration
In 2012 the coarse-grained architectural approach was taken a step further by combining the logic blocks and interconnects of traditional FPGAs with embedded microprocessors and related peripherals to form a complete "system on a programmable chip". This work mirrors the architecture created by Ron Perloff and Hanan Potash of Burroughs Advanced Systems Group in 1982, which combined a reconfigurable CPU architecture on a single chip called the SB24. Examples of such hybrid technologies can be found in the Xilinx Zynq-7000 All Programmable SoC, which includes a 1.0 GHz dual-core ARM Cortex-A9 MPCore processor embedded within the FPGA's logic fabric, or in the Altera Arria V FPGA, which includes an 800 MHz dual-core ARM Cortex-A9 MPCore. The Atmel FPSLIC is another such device, which uses an AVR processor in combination with Atmel's programmable logic architecture. The Microsemi SmartFusion devices incorporate an ARM Cortex-M3 hard processor core (with up to 512 kB of flash and 64 kB of RAM) and analog peripherals such as multi-channel analog-to-digital converters and digital-to-analog converters in their flash-memory-based FPGA fabric.
Clocking
Most of the circuitry built inside of an FPGA is synchronous circuitry that requires a clock signal. FPGAs contain dedicated global and regional routing networks for clock and reset so they can be delivered with minimal skew. Also, FPGAs generally contain analog phase-locked loop and/or delay-locked loop components to synthesize new clock frequencies as well as attenuate jitter. Complex designs can use multiple clocks with different frequency and phase relationships, each forming separate clock domains. These clock signals can be generated locally by an oscillator or they can be recovered from a high speed serial data stream. Care must be taken when building clock domain crossing circuitry to avoid metastability. FPGAs generally contain blocks of RAM that are capable of working as dual port RAMs with different clocks, aiding in the construction of FIFOs and dual port buffers that connect differing clock domains.
3D architectures
To shrink the size and power consumption of FPGAs, vendors such as Tabula and Xilinx have introduced 3D or stacked architectures. Following the introduction of its 28 nm 7-series FPGAs, Xilinx said that several of the highest-density parts in those FPGA product lines would be constructed using multiple dies in one package, employing technology developed for 3D construction and stacked-die assemblies.
Xilinx's approach stacks several (three or four) active FPGA dies side by side on a silicon interposer – a single piece of silicon that carries passive interconnect. The multi-die construction also allows different parts of the FPGA to be created with different process technologies, as the process requirements are different between the FPGA fabric itself and the very high speed 28 Gbit/s serial transceivers. An FPGA built in this way is called a heterogeneous FPGA.
Altera's heterogeneous approach involves using a single monolithic FPGA die and connecting other die/technologies to the FPGA using Intel's embedded multi-die interconnect bridge (EMIB) technology.
Programming
To define the behavior of the FPGA, the user provides a design in a hardware description language (HDL) or as a schematic design. The HDL form is better suited to working with large structures because it is possible to specify high-level functional behavior rather than drawing every piece by hand. However, schematic entry can allow for easier visualization of a design and its component modules.
Using an electronic design automation tool, a technology-mapped netlist is generated. The netlist can then be fit to the actual FPGA architecture using a process called place-and-route, usually performed by the FPGA company's proprietary place-and-route software. The user will validate the map, place and route results via timing analysis, simulation, and other verification and validation methodologies. Once the design and validation process is complete, the binary file generated, typically using the FPGA vendor's proprietary software, is used to (re-)configure the FPGA. This file is transferred to the FPGA/CPLD via a serial interface (JTAG) or to an external memory device like an EEPROM.
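As a toy illustration of what the technology-mapping step does at its smallest scale, the following sketch (a simplified example; real synthesis tools additionally perform optimization, timing analysis, and mapping across thousands of cells) derives the 16-bit configuration word of a single 4-input LUT from a Boolean function:

    # Toy technology mapping: compute the configuration (truth table) of one
    # 4-input LUT so that it implements a given Boolean function.
    def map_to_lut4(func):
        config = 0
        for i in range(16):
            a = (i >> 0) & 1
            b = (i >> 1) & 1
            c = (i >> 2) & 1
            d = (i >> 3) & 1
            if func(a, b, c, d):
                config |= 1 << i
        return config

    # Map the function (a AND b) OR (c XOR d) onto a single LUT.
    config = map_to_lut4(lambda a, b, c, d: (a & b) | (c ^ d))
    print("LUT configuration word: 0x%04X" % config)

In a real flow, values like this become part of the bitstream that the place-and-route tools emit for every LUT, routing switch, and I/O block in the device.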
The most common HDLs are VHDL and Verilog, as well as extensions such as SystemVerilog. However, in an attempt to reduce the complexity of designing in HDLs, which have been compared to the equivalent of assembly languages, there are moves to raise the abstraction level through the introduction of alternative languages. National Instruments' LabVIEW graphical programming language (sometimes referred to as "G") has an FPGA add-in module available to target and program FPGA hardware. Verilog was created to simplify the process and make HDL more robust and flexible; it is currently the most popular HDL. Unlike VHDL, Verilog has a C-like syntax, and it provides a level of abstraction that hides the details of the implementation.
To simplify the design of complex systems in FPGAs, there exist libraries of predefined complex functions and circuits that have been tested and optimized to speed up the design process. These predefined circuits are commonly called intellectual property (IP) cores, and are available from FPGA vendors and third-party IP suppliers. They are rarely free, and typically released under proprietary licenses. Other predefined circuits are available from developer communities such as OpenCores (typically released under free and open source licenses such as the GPL, BSD or similar license), and other sources. Such designs are known as "open-source hardware."
In a typical design flow, an FPGA application developer will simulate the design at multiple stages throughout the design process. Initially the RTL description in VHDL or Verilog is simulated by creating test benches to simulate the system and observe results. Then, after the synthesis engine has mapped the design to a netlist, the netlist is translated to a gate-level description where simulation is repeated to confirm the synthesis proceeded without errors. Finally the design is laid out in the FPGA at which point propagation delays can be added and the simulation run again with these values back-annotated onto the netlist.
More recently, OpenCL (Open Computing Language) is being used by programmers to take advantage of the performance and power efficiencies that FPGAs provide. OpenCL allows programmers to develop code in the C programming language and target FPGA functions as OpenCL kernels using OpenCL constructs. For further information, see high-level synthesis and C to HDL.
Most FPGAs rely on an SRAM-based approach to be programmed. These FPGAs are in-system programmable and re-programmable, but require external boot devices: for example, an external flash memory or EEPROM device often loads the configuration into the internal SRAM that controls routing and logic at power-up. The SRAM approach is based on CMOS.
Rarer alternatives to the SRAM approach include:
Fuse: One-time programmable. Bipolar. Obsolete.
Antifuse: One-time programmable. CMOS. Examples: Actel SX and Axcelerator families; Quicklogic Eclipse II family.
PROM: Programmable Read-Only Memory technology. One-time programmable because of plastic packaging. Obsolete.
EPROM: Erasable Programmable Read-Only Memory technology. One-time programmable but with window, can be erased with ultraviolet (UV) light. CMOS. Obsolete.
EEPROM: Electrically Erasable Programmable Read-Only Memory technology. Can be erased, even in plastic packages. Some but not all EEPROM devices can be in-system programmed. CMOS.
Flash: Flash-erase EPROM technology. Can be erased, even in plastic packages. Some but not all flash devices can be in-system programmed. Usually, a flash cell is smaller than an equivalent EEPROM cell and is therefore less expensive to manufacture. CMOS. Example: Actel ProASIC family.
Major manufacturers
In 2016, long-time industry rivals Xilinx (now part of AMD) and Altera (now an Intel subsidiary) were the FPGA market leaders. At that time, they controlled nearly 90 percent of the market.
Both Xilinx (now AMD) and Altera (now Intel) provide proprietary electronic design automation software for Windows and Linux (ISE/Vivado and Quartus) which enables engineers to design, analyze, simulate, and synthesize (compile) their designs.
Other manufacturers include:
Microchip:
Microsemi (previously Actel), producing antifuse, flash-based, mixed-signal FPGAs; acquired by Microchip in 2018
Atmel, a second source of some Altera-compatible devices; also FPSLIC mentioned above; acquired by Microchip in 2016
Lattice Semiconductor, which manufactures low-power SRAM-based FPGAs featuring integrated configuration flash, instant-on and live reconfiguration
SiliconBlue Technologies, which provides extremely low power SRAM-based FPGAs with optional integrated nonvolatile configuration memory; acquired by Lattice in 2011
QuickLogic, which manufactures ultra-low-power sensor hubs and extremely low-power, low-density SRAM-based FPGAs, with display bridges offering MIPI and RGB inputs and MIPI, RGB and LVDS outputs
Achronix, manufacturing SRAM-based FPGAs with 1.5 GHz fabric speed
In March 2010, Tabula announced their FPGA technology that uses time-multiplexed logic and interconnect that claims potential cost savings for high-density applications. On March 24, 2015, Tabula officially shut down.
On June 1, 2015, Intel announced it would acquire Altera for approximately $16.7 billion and completed the acquisition on December 30, 2015.
On October 27, 2020, AMD announced it would acquire Xilinx.
Applications
An FPGA can be used to solve any problem which is computable. This is trivially proven by the fact that FPGAs can be used to implement a soft microprocessor, such as the Xilinx MicroBlaze or Altera Nios II. Their advantage lies in being significantly faster for some applications because of their parallel nature and because the number of gates used can be optimized for certain processes.
FPGAs originally began as competitors to CPLDs to implement glue logic for printed circuit boards. As their size, capabilities, and speed increased, FPGAs took over additional functions to the point where some are now marketed as full systems on chips (SoCs). Particularly with the introduction of dedicated multipliers into FPGA architectures in the late 1990s, applications which had traditionally been the sole reserve of digital signal processor hardware (DSPs) began to incorporate FPGAs instead.
Another trend in the use of FPGAs is hardware acceleration, where one can use the FPGA to accelerate certain parts of an algorithm and share part of the computation between the FPGA and a generic processor. The search engine Bing is noted for adopting FPGA acceleration for its search algorithm in 2014. FPGAs are also seeing increased use as AI accelerators, including in Microsoft's so-termed "Project Catapult" and for accelerating artificial neural networks for machine learning applications.
Traditionally, FPGAs have been reserved for specific vertical applications where the volume of production is small. For these low-volume applications, the premium that companies pay in hardware cost per unit for a programmable chip is more affordable than the development resources spent on creating an ASIC. More recently, however, new cost and performance dynamics have broadened the range of viable applications.
The company Gigabyte Technology created an i-RAM card, which allows computer RAM to be used as a hard drive, based on a Xilinx FPGA, even though a custom-made chip would have been cheaper in large quantities. The FPGA was chosen to bring the product to market quickly, and because the initial run was planned at only 1,000 units, an FPGA was the most economical choice.
Other uses for FPGAs include:
Space (i.e. with radiation hardening)
Hardware security modules
Security
FPGAs have both advantages and disadvantages as compared to ASICs or secure microprocessors, concerning hardware security. FPGAs' flexibility makes malicious modifications during fabrication a lower risk. Previously, for many FPGAs, the design bitstream was exposed while the FPGA loads it from external memory (typically on every power-on). All major FPGA vendors now offer a spectrum of security solutions to designers such as bitstream encryption and authentication. For example, Altera and Xilinx offer AES encryption (up to 256-bit) for bitstreams stored in an external flash memory.
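As a rough illustration of what bitstream encryption buys a designer, the following sketch (Python, using the third-party cryptography package) encrypts and authenticates a configuration image before it is stored in external flash and checks it again at load time. It is a conceptual sketch only, not any vendor's actual tool flow or on-chip decryption engine; the key handling, nonce layout, and function names are assumptions made for the example.

```python
# Conceptual sketch of bitstream-at-rest protection (not a vendor tool flow).
# Requires the third-party package: pip install cryptography
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def encrypt_bitstream(bitstream: bytes, key: bytes) -> bytes:
    """Encrypt and authenticate a configuration bitstream for external flash."""
    nonce = os.urandom(12)                 # 96-bit nonce, stored alongside the ciphertext
    ciphertext = AESGCM(key).encrypt(nonce, bitstream, None)
    return nonce + ciphertext              # flash image = nonce || ciphertext || tag

def decrypt_bitstream(flash_image: bytes, key: bytes) -> bytes:
    """What an on-chip decryptor conceptually does at configuration time."""
    nonce, ciphertext = flash_image[:12], flash_image[12:]
    return AESGCM(key).decrypt(nonce, ciphertext, None)   # raises if tampered with

key = AESGCM.generate_key(bit_length=256)  # 256-bit key, as mentioned above
image = encrypt_bitstream(b"...raw bitstream bytes...", key)
assert decrypt_bitstream(image, key) == b"...raw bitstream bytes..."
```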
FPGAs that store their configuration internally in nonvolatile flash memory, such as Microsemi's ProAsic 3 or Lattice's XP2 programmable devices, do not expose the bitstream and do not need encryption. In addition, flash memory for a lookup table provides single event upset protection for space applications. Customers wanting a higher guarantee of tamper resistance can use write-once, antifuse FPGAs from vendors such as Microsemi.
With its Stratix 10 FPGAs and SoCs, Altera introduced a Secure Device Manager and physical unclonable functions to provide high levels of protection against physical attacks.
In 2012 researchers Sergei Skorobogatov and Christopher Woods demonstrated that FPGAs can be vulnerable to hostile intent. They discovered that a critical backdoor vulnerability had been manufactured in silicon as part of the Actel/Microsemi ProAsic 3, making it vulnerable on many levels, such as reprogramming crypto and access keys, accessing the unencrypted bitstream, modifying low-level silicon features, and extracting configuration data.
Similar technologies
Historically, FPGAs have been slower, less energy efficient and generally achieved less functionality than their fixed ASIC counterparts. A study from 2006 showed that designs implemented on FPGAs need on average 40 times as much area, draw 12 times as much dynamic power, and run at one third the speed of corresponding ASIC implementations. More recently, FPGAs such as the Xilinx Virtex-7 or the Altera Stratix V have come to rival corresponding ASIC and ASSP ("application-specific standard part", such as a standalone USB interface chip) solutions by providing significantly reduced power usage, increased speed, lower materials cost, minimal implementation real-estate, and increased possibilities for re-configuration 'on-the-fly'. A design that previously required 6 to 10 ASICs can now be achieved using only one FPGA. Advantages of FPGAs include the ability to re-program a device already deployed (i.e. "in the field") to fix bugs, as well as shorter time to market and lower non-recurring engineering costs. Vendors can also take a middle road via FPGA prototyping: developing their prototype hardware on FPGAs, but manufacturing their final version as an ASIC so that it can no longer be modified after the design has been committed. This is often also the case with new processor designs. Some FPGAs have the capability of partial re-configuration that lets one portion of the device be re-programmed while other portions continue running.
The primary differences between complex programmable logic devices (CPLDs) and FPGAs are architectural. A CPLD has a comparatively restrictive structure consisting of one or more programmable sum-of-products logic arrays feeding a relatively small number of clocked registers. As a result, CPLDs are less flexible, but have the advantage of more predictable timing delays. FPGA architectures, on the other hand, are dominated by interconnect. This makes them far more flexible (in terms of the range of designs that are practical for implementation on them) but also far more complex to design for, or at least requiring more complex electronic design automation (EDA) software. In practice, the distinction between FPGAs and CPLDs is often one of size, as FPGAs are usually much larger in terms of resources than CPLDs. Typically only FPGAs contain more complex embedded functions such as adders, multipliers, memory, and serializer/deserializers. Another common distinction is that CPLDs contain embedded flash memory to store their configuration while FPGAs usually (but not always) require external non-volatile memory. When a design requires simple instant-on (logic is already configured at power-up), CPLDs are generally preferred. For most other applications FPGAs are generally preferred. Sometimes both CPLDs and FPGAs are used in a single system design. In those designs, CPLDs generally perform glue logic functions, and are responsible for "booting" the FPGA as well as controlling reset and boot sequence of the complete circuit board. Therefore, depending on the application it may be judicious to use both FPGAs and CPLDs in a single design.
See also
FPGA Mezzanine Card
FPGA prototyping
List of HDL simulators
List of Xilinx FPGAs
Verilog
SystemVerilog
VHDL
Hardware acceleration
References
Further reading
Mencer, Oskar et al. (2020). "The history, status, and future of FPGAs". Communications of the ACM. ACM. Vol. 63, No. 10. doi:10.1145/3410669
External links
Integrated circuits
Semiconductor devices
American inventions
Hardware acceleration |
10997 | https://en.wikipedia.org/wiki/Freenet | Freenet | Freenet is a peer-to-peer platform for censorship-resistant, anonymous communication. It uses a decentralized distributed data store to keep and deliver information, and has a suite of free software for publishing and communicating on the Web without fear of censorship. Both Freenet and some of its associated tools were originally designed by Ian Clarke, who defined Freenet's goal as providing freedom of speech on the Internet with strong anonymity protection.
The distributed data store of Freenet is used by many third-party programs and plugins to provide microblogging and media sharing, anonymous and decentralised version tracking, blogging, a generic web of trust for decentralized spam resistance, Shoeshop for using Freenet over sneakernet, and many more.
History
The origin of Freenet can be traced to Ian Clarke's student project at the University of Edinburgh, which he completed as a graduation requirement in the summer of 1999. Ian Clarke's resulting unpublished report "A distributed decentralized information storage and retrieval system" (1999) provided the foundation for the seminal paper written in collaboration with other researchers, "Freenet: A Distributed Anonymous Information Storage and Retrieval System" (2001). According to CiteSeer, it became one of the most frequently cited computer science articles in 2002.
Freenet can provide anonymity on the Internet by storing small encrypted snippets of content distributed on the computers of its users and connecting only through intermediate computers which pass on requests for content and send it back without knowing the contents of the full file, similar to how routers on the Internet route packets without knowing anything about files—except Freenet has caching, a layer of strong encryption, and no reliance on centralized structures. This allows users to publish anonymously or retrieve various kinds of information.
Freenet has been under continuous development since 2000.
Freenet 0.7, released on 8 May 2008, is a major re-write incorporating a number of fundamental changes. The most fundamental change is support for darknet operation. Version 0.7 offered two modes of operation: a mode in which it connects only to friends, and an opennet-mode in which it connects to any other Freenet user. Both modes can be run simultaneously. When a user switches to pure darknet operation, Freenet becomes very difficult to detect from the outside. The transport layer created for the darknet mode allows communication over restricted routes as commonly found in mesh networks, as long as these connections follow a small-world structure. Other modifications include switching from TCP to UDP, which allows UDP hole punching along with faster transmission of messages between peers in the network.
Freenet 0.7.5, released on 12 June 2009, offers a variety of improvements over 0.7. These include reduced memory usage, faster insert and retrieval of content, significant improvements to the FProxy web interface used for browsing freesites, and a large number of smaller bugfixes, performance enhancements, and usability improvements. Version 0.7.5 also shipped with a new version of the Windows installer.
As of build 1226, released on 30 July 2009, features that have been written include significant security improvements against both attackers acting on the network and physical seizure of the computer running the node.
As of build 1468, released on 11 July 2015, the Freenet core stopped using the db4o database and laid the foundation for an efficient interface to the Web of Trust plugin which provides spam resistance.
Freenet has always been free software, but until 2011 it required users to install Java. This problem was solved by making Freenet compatible with OpenJDK, a free and open source implementation of the Java Platform.
On 11 February 2015, Freenet received the SUMA-Award for "protection against total surveillance".
Features and user interface
Freenet served as the model for the Japanese peer-to-peer file-sharing programs Winny, Share and Perfect Dark, but this model differs from p2p networks such as BitTorrent and eMule. Freenet separates the underlying network structure and protocol from how users interact with the network; as a result, there are a variety of ways to access content on the Freenet network. The simplest is via FProxy, which is integrated with the node software and provides a web interface to content on the network. Using FProxy, a user can browse freesites (websites that use normal HTML and related tools, but whose content is stored within Freenet rather than on a traditional web server). The web interface is also used for most configuration and node management tasks. Through the use of separate applications or plugins loaded into the node software, users can interact with the network in other ways, such as forums similar to web forums or Usenet or interfaces more similar to traditional P2P "filesharing" interfaces.
While Freenet provides an HTTP interface for browsing freesites, it is not a proxy for the World Wide Web; Freenet can be used to access only the content that has been previously inserted into the Freenet network. In this way, it is more similar to Tor's onion services than to anonymous proxy software like Tor's proxy.
Freenet's focus lies on free speech and anonymity. Because of that, Freenet acts differently at certain points that are (directly or indirectly) related to the anonymity part. Freenet attempts to protect the anonymity of both people inserting data into the network (uploading) and those retrieving data from the network (downloading). Unlike file sharing systems, there is no need for the uploader to remain on the network after uploading a file or group of files. Instead, during the upload process, the files are broken into chunks and stored on a variety of other computers on the network. When downloading, those chunks are found and reassembled. Every node on the Freenet network contributes storage space to hold files and bandwidth that it uses to route requests from its peers.
As a direct result of the anonymity requirements, the node requesting content does not normally connect directly to the node that has it; instead, the request is routed across several intermediaries, none of which know which node made the request or which one had it. As a result, the total bandwidth required by the network to transfer a file is higher than in other systems, which can result in slower transfers, especially for infrequently accessed content.
Since version 0.7, Freenet offers two different levels of security: opennet and darknet. With opennet, users connect to arbitrary other users. With darknet, users connect only to "friends" with whom they previously exchanged public keys, named node-references. Both modes can be used together.
Content
Freenet's founders argue that true freedom of speech comes only with true anonymity and that the beneficial uses of Freenet outweigh its negative uses. Their view is that free speech, in itself, is not in contradiction with any other consideration—the information is not the crime. Freenet attempts to remove the possibility of any group imposing its beliefs or values on any data. Although many states censor communications to different extents, they all share one commonality in that a body must decide what information to censor and what information to allow. What may be acceptable to one group of people may be considered offensive or even dangerous to another. In essence, the purpose of Freenet is to ensure that no one is allowed to decide what is acceptable.
Freenet's use in authoritarian nations is difficult to track due to the very nature of Freenet's goals. One group, Freenet China, introduced the Freenet software to Chinese users starting in 2001 and distributed it within China through e-mails and on disks after the group's website was blocked by the Chinese authorities on the mainland. It was reported that in 2002 Freenet China had several thousand dedicated users. However, Freenet opennet traffic was blocked in China around the 2010s.
Technical design
The Freenet file sharing network stores documents and allows them to be retrieved later by an associated key, as is now possible with protocols such as HTTP. The network is designed to be highly survivable. The system has no central servers and is not subject to the control of any one individual or organization, including the designers of Freenet. The size of the codebase is over 192,000 lines of code. Information stored on Freenet is distributed around the network and stored on several different nodes. Encryption of data and relaying of requests makes it difficult to determine who inserted content into Freenet, who requested that content, or where the content was stored. This protects the anonymity of participants, and also makes it very difficult to censor specific content. Content is stored encrypted, making it difficult for even the operator of a node to determine what is stored on that node. This provides plausible deniability, which, in combination with request relaying, means that safe harbor laws that protect service providers may also protect Freenet node operators. When asked about the topic, Freenet developers defer to the EFF discussion which says that not being able to filter anything is a safe choice.
Distributed storage and caching of data
Like Winny, Share and Perfect Dark, Freenet not only transmits data between nodes but actually stores them, working as a huge distributed cache. To achieve this, each node allocates some amount of disk space to store data; this is configurable by the node operator, but is typically several GB (or more).
Files on Freenet are typically split into multiple small blocks, with duplicate blocks created to provide redundancy. Each block is handled independently, meaning that a single file may have parts stored on many different nodes.
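As a toy illustration of this block-oriented storage, the sketch below (Python) splits data into fixed-size blocks and duplicates each block as a stand-in for redundancy. Real Freenet uses erasure coding (redundant check blocks) rather than plain duplication; the block size and redundancy factor here are arbitrary assumptions.

```python
# Illustrative splitter: fixed-size blocks plus duplication as a stand-in for real FEC.
from typing import List

BLOCK_SIZE = 32 * 1024   # assumed block size for this sketch
REDUNDANCY = 2           # each block is stored this many times (stand-in for check blocks)

def split_into_blocks(data: bytes) -> List[bytes]:
    blocks = [data[i:i + BLOCK_SIZE] for i in range(0, len(data), BLOCK_SIZE)]
    return [b for b in blocks for _ in range(REDUNDANCY)]   # duplicate for redundancy

def reassemble(blocks: List[bytes]) -> bytes:
    unique = blocks[::REDUNDANCY]          # keep one copy of each block, in order
    return b"".join(unique)

data = b"x" * 100_000
assert reassemble(split_into_blocks(data)) == data
```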
Information flow in Freenet is different from networks like eMule or BitTorrent; in Freenet:
A user wishing to share a file or update a freesite "inserts" the file "to the network"
After "insertion" is finished, the publishing node is free to shut down, because the file is stored in the network. It will remain available for other users whether or not the original publishing node is online. No single node is responsible for the content; instead, it is replicated to many different nodes.
Two advantages of this design are high reliability and anonymity. Information remains available even if the publisher node goes offline, and is anonymously spread over many hosting nodes as encrypted blocks, not entire files.
The key disadvantage of the storage method is that no one node is responsible for any chunk of data. If a piece of data is not retrieved for some time and a node keeps getting new data, it will drop the old data once its allocated disk space is fully used. In this way Freenet tends to 'forget' data which is not retrieved regularly (see also Effect).
While users can insert data into the network, there is no way to delete data. Due to Freenet's anonymous nature, the original publishing node or owner of any piece of data is unknown. The only way data can be removed is if users don't request it.
Network
Typically, a host computer on the network runs the software that acts as a node, and it connects to other hosts running that same software to form a large distributed, variable-size network of peer nodes. Some nodes are end user nodes, from which documents are requested and presented to human users. Other nodes serve only to route data. All nodes communicate with each other identically – there are no dedicated "clients" or "servers". It is not possible for a node to rate another node except by its capacity to insert and fetch data associated with a key. This is unlike most other P2P networks, where node administrators can employ a ratio system in which users have to share a certain amount of content before they can download.
Freenet may also be considered a small world network.
The Freenet protocol is intended to be used on a network of complex topology, such as the Internet (Internet Protocol). Each node knows only about some number of other nodes that it can reach directly (its conceptual "neighbors"), but any node can be a neighbor to any other; no hierarchy or other structure is intended. Each message is routed through the network by passing from neighbor to neighbor until it reaches its destination. As each node passes a message to a neighbor, it does not know whether the neighbor will forward the message to another node, or is the final destination or original source of the message. This is intended to protect the anonymity of users and publishers.
Each node maintains a data store containing documents associated with keys, and a routing table associating nodes with records of their performance in retrieving different keys.
Protocol
The Freenet protocol uses a key-based routing protocol, similar to distributed hash tables. The routing algorithm changed significantly in version 0.7. Prior to version 0.7, Freenet used a heuristic routing algorithm where each node had no fixed location, and routing was based on which node had served a key closest to the key being fetched (in version 0.3) or which is estimated to serve it faster (in version 0.5). In either case, new connections were sometimes added to downstream nodes (i.e. the node that answered the request) when requests succeeded, and old nodes were discarded in least recently used order (or something close to it). Oskar Sandberg's research (during the development of version 0.7) shows that this "path folding" is critical, and that a very simple routing algorithm will suffice provided there is path folding.
The disadvantage of this is that it is very easy for an attacker to find Freenet nodes, and connect to them, because every node is continually attempting to find new connections. In version 0.7, Freenet supports both "opennet" (similar to the old algorithms, but simpler), and "darknet" (all node connections are set up manually, so only your friends know your node's IP address). Darknet is less convenient, but much more secure against a distant attacker.
This change required major changes in the routing algorithm. Every node has a location, which is a number between 0 and 1. When a key is requested, first the node checks the local data store. If it's not found, the key's hash is turned into another number in the same range, and the request is routed to the node whose location is closest to the key. This goes on until some number of hops is exceeded, there are no more nodes to search, or the data is found. If the data is found, it is cached on each node along the path. So there is no one source node for a key, and attempting to find where it is currently stored will result in it being cached more widely. Essentially the same process is used to insert a document into the network: the data is routed according to the key until it runs out of hops, and if no existing document is found with the same key, it is stored on each node. If older data is found, the older data is propagated and returned to the originator, and the insert "collides".
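The greedy routing just described can be sketched in a few lines of Python. The sketch below hashes a key onto the circular location space [0, 1), forwards a request to the unvisited neighbour whose location is closest to the key, and caches the data on every node along a successful path. The Node class, the hop limit, and the distance metric are simplifications chosen for illustration, not Freenet's actual data structures or parameters.

```python
# Simplified greedy key-based routing over a circular [0, 1) location space.
import hashlib

def key_location(key: bytes) -> float:
    """Map a key's hash onto the location space [0, 1)."""
    return int.from_bytes(hashlib.sha256(key).digest()[:8], "big") / 2**64

def distance(a: float, b: float) -> float:
    """Circular distance on [0, 1)."""
    d = abs(a - b)
    return min(d, 1.0 - d)

class Node:
    def __init__(self, location: float):
        self.location = location
        self.neighbours = []   # list of Node
        self.store = {}        # key (bytes) -> data (bytes)

def route_request(start: Node, key: bytes, max_hops: int = 18):
    """Greedy search; on success the data is cached on every node along the path."""
    target = key_location(key)
    path, node = [], start
    for _ in range(max_hops):
        path.append(node)
        if key in node.store:
            data = node.store[key]
            for visited in path:           # cache along the return path
                visited.store[key] = data
            return data
        candidates = [n for n in node.neighbours if n not in path]
        if not candidates:
            return None                    # dead end: no more nodes to search
        node = min(candidates, key=lambda n: distance(n.location, target))
    return None                            # hop limit exceeded

# Tiny demo: a ring of eight nodes, one of which already stores the data.
nodes = [Node(i / 8) for i in range(8)]
for i, n in enumerate(nodes):
    n.neighbours = [nodes[(i - 1) % 8], nodes[(i + 1) % 8]]
nodes[5].store[b"example-key"] = b"hello"
assert route_request(nodes[0], b"example-key") == b"hello"
```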
But this works only if the locations are clustered in the right way. Freenet assumes that the darknet (a subset of the global social network) is a small-world network, and nodes constantly attempt to swap locations (using the Metropolis–Hastings algorithm) in order to minimize their distance to their neighbors. If the network actually is a small-world network, Freenet should find data reasonably quickly; ideally on the order of O(log² n) hops. However, it does not guarantee that data will be found at all.
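A hedged sketch of that location-swapping step is shown below (Python). It uses one common formulation of the Metropolis–Hastings acceptance rule: a swap that shrinks the product of the two nodes' edge distances is always kept, and a worsening swap is kept with probability before/after. The exact objective function and acceptance probabilities in Freenet's implementation may differ; treat these details as assumptions.

```python
# Metropolis–Hastings style location swap between two nodes (conceptual sketch).
import random

def circular_distance(a: float, b: float) -> float:
    d = abs(a - b)
    return min(d, 1.0 - d)

def edge_product(location: float, neighbour_locations: list) -> float:
    """Product of circular distances from a location to a node's neighbours."""
    p = 1.0
    for n in neighbour_locations:
        p *= max(circular_distance(location, n), 1e-12)   # guard against zero
    return p

def maybe_swap(loc_a: float, nbrs_a: list, loc_b: float, nbrs_b: list):
    """Return the (possibly swapped) locations of nodes A and B."""
    before = edge_product(loc_a, nbrs_a) * edge_product(loc_b, nbrs_b)
    after = edge_product(loc_b, nbrs_a) * edge_product(loc_a, nbrs_b)
    # Keep a swap that shortens edges; otherwise keep it with probability before/after.
    if after <= before or random.random() < before / after:
        return loc_b, loc_a    # swapped
    return loc_a, loc_b        # unchanged
```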
Eventually, either the document is found or the hop limit is exceeded. The terminal node sends a reply that makes its way back to the originator along the route specified by the intermediate nodes' records of pending requests. The intermediate nodes may choose to cache the document along the way. Besides saving bandwidth, this also makes documents harder to censor as there is no one "source node".
Effect
Initially, the locations in darknet are distributed randomly. This means that routing of requests is essentially random. In opennet connections are established by a join request which provides an optimized network structure if the existing network is already optimized. So the data in a newly started Freenet will be distributed somewhat randomly.
As location swapping (on darknet) and path folding (on opennet) progress, nodes which are close to one another will increasingly have close locations, and nodes which are far away will have distant locations. Data with similar keys will be stored on the same node.
The result is that the network will self-organize into a distributed, clustered structure where nodes tend to hold data items that are close together in key space. There will probably be multiple such clusters throughout the network, any given document being replicated numerous times, depending on how much it is used. This is a kind of "spontaneous symmetry breaking", in which an initially symmetric state (all nodes being the same, with random initial keys for each other) leads to a highly asymmetric situation, with nodes coming to specialize in data that has closely related keys.
There are forces which tend to cause clustering (shared closeness data spreads throughout the network), and forces that tend to break up clusters (local caching of commonly used data). These forces will be different depending on how often data is used, so that seldom-used data will tend to be on just a few nodes which specialize in providing that data, and frequently used items will be spread widely throughout the network. This automatic mirroring counteracts the times when web traffic becomes overloaded, and due to a mature network's intelligent routing, a network of size n should require only log(n) time to retrieve a document on average.
Keys
Keys are hashes: there is no notion of semantic closeness when speaking of key closeness. Therefore, there will be no correlation between key closeness and similar popularity of data as there might be if keys did exhibit some semantic meaning, thus avoiding bottlenecks caused by popular subjects.
There are two main varieties of keys in use on Freenet, the Content Hash Key (CHK) and the Signed Subspace Key (SSK). A subtype of SSKs is the Updatable Subspace Key (USK) which adds versioning to allow secure updating of content.
A CHK is a SHA-256 hash of a document (after encryption, which itself depends on the hash of the plaintext) and thus a node can check that the document returned is correct by hashing it and checking the digest against the key. This key contains the meat of the data on Freenet. It carries all the binary data building blocks for the content to be delivered to the client for reassembly and decryption. The CHK is unique by nature and provides tamperproof content. A hostile node altering the data under a CHK will immediately be detected by the next node or the client. CHKs also reduce the redundancy of data since the same data will have the same CHK and when multiple sites reference the same large files, they can reference to the same CHK.
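The convergent-encryption idea behind CHKs can be sketched as follows (Python, using hashlib and the third-party cryptography package): the encryption key is derived from the plaintext's hash, and the CHK is the hash of the resulting ciphertext, so any node can verify a returned block without being able to read it, and identical files always map to the same key. Freenet's real cipher choices, nonce handling, and key encoding are not reproduced here; the fixed nonce and function names are assumptions made for the example.

```python
# Convergent-encryption sketch: key = hash(plaintext), CHK = hash(ciphertext).
import hashlib
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

FIXED_NONCE = b"\x00" * 12   # tolerable here only because each key is unique per content

def make_chk(plaintext: bytes):
    content_key = hashlib.sha256(plaintext).digest()            # depends on the plaintext
    ciphertext = AESGCM(content_key).encrypt(FIXED_NONCE, plaintext, None)
    chk = hashlib.sha256(ciphertext).hexdigest()                 # the routing/lookup key
    return chk, ciphertext, content_key

def verify_and_decrypt(chk: str, ciphertext: bytes, content_key: bytes) -> bytes:
    # Any node (or the client) can check integrity without knowing the plaintext.
    assert hashlib.sha256(ciphertext).hexdigest() == chk, "tampered block"
    return AESGCM(content_key).decrypt(FIXED_NONCE, ciphertext, None)

chk, ct, key = make_chk(b"hello freenet")
assert verify_and_decrypt(chk, ct, key) == b"hello freenet"
assert make_chk(b"hello freenet")[0] == chk    # identical content, identical CHK
```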
SSKs are based on public-key cryptography. Currently Freenet uses the DSA algorithm. Documents inserted under SSKs are signed by the inserter, and this signature can be verified by every node to ensure that the data is not tampered with. SSKs can be used to establish a verifiable pseudonymous identity on Freenet, and allow for multiple documents to be inserted securely by a single person. Files inserted with an SSK are effectively immutable, since inserting a second file with the same name can cause collisions. USKs resolve this by adding a version number to the keys which is also used for providing update notification for keys registered as bookmarks in the web interface. Another subtype of the SSK is the Keyword Signed Key, or KSK, in which the key pair is generated in a standard way from a simple human-readable string. Inserting a document using a KSK allows the document to be retrieved and decrypted if and only if the requester knows the human-readable string; this allows for more convenient (but less secure) URIs for users to refer to.
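The signing-and-verification idea behind SSKs can be sketched as below (Python, cryptography package, DSA as mentioned above): the inserter signs a document with a private key, and any node can verify the signature against the published public key before accepting the data. Freenet's actual SSK key derivation, encodings, and parameters are more involved; this is only a conceptual sketch.

```python
# Sign-then-verify sketch for subspace-style keys (DSA, per the paragraph above).
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import dsa

private_key = dsa.generate_private_key(key_size=2048)   # the inserter's identity
public_key = private_key.public_key()                    # published as part of the key

document = b"contents of a freesite page"
signature = private_key.sign(document, hashes.SHA256())  # attached to the insert

def node_accepts(doc: bytes, sig: bytes) -> bool:
    """What every node conceptually checks before storing or serving the document."""
    try:
        public_key.verify(sig, doc, hashes.SHA256())
        return True
    except InvalidSignature:
        return False

assert node_accepts(document, signature)
assert not node_accepts(document + b" tampered", signature)
```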
Scalability
A network is said to be scalable if its performance does not deteriorate even if the network is very large. The scalability of Freenet is being evaluated, but similar architectures have been shown to scale logarithmically. This work indicates that Freenet can find data in O(log² n) hops on a small-world network (which includes both opennet and darknet style Freenet networks), when ignoring the caching which could improve the scalability for popular content. However, this scalability is difficult to test without a very large network. Furthermore, the security features inherent to Freenet make detailed performance analysis (including things as simple as determining the size of the network) difficult to do accurately. As of now, the scalability of Freenet has yet to be tested.
Darknet versus opennet
As of version 0.7, Freenet supports both "darknet" and "opennet" connections. Opennet connections are made automatically by nodes with opennet enabled, while darknet connections are manually established between users that know and trust each other. Freenet developers describe the trust needed as "will not crack their Freenet node". Opennet connections are easy to use, but darknet connections are more secure against attackers on the network, and can make it difficult for an attacker (such as an oppressive government) to even determine that a user is running Freenet in the first place.
The core innovation in Freenet 0.7 is to allow a globally scalable darknet, capable (at least in theory) of supporting millions of users. Previous darknets, such as WASTE, have been limited to relatively small disconnected networks. The scalability of Freenet is made possible by the fact that human relationships tend to form small-world networks, a property that can be exploited to find short paths between any two people. The work is based on a speech given at DEF CON 13 by Ian Clarke and Swedish mathematician Oskar Sandberg. Furthermore, the routing algorithm is capable of routing over a mixture of opennet and darknet connections, allowing people who have only a few friends using the network to get the performance from having sufficient connections while still receiving some of the security benefits of darknet connections. This also means that small darknets where some users also have opennet connections are fully integrated into the whole Freenet network, allowing all users access to all content, whether they run opennet, darknet, or a hybrid of the two, except for darknet pockets connected only by a single hybrid node.
Tools and applications
Unlike many other P2P applications Freenet does not provide comprehensive functionality itself. Freenet is modular and features an API called Freenet Client Protocol (FCP) for other programs to use to implement services such as message boards, file sharing, or online chat.
Communication
Freenet Messaging System (FMS)
FMS was designed to address problems with Frost such as denial of service attacks and spam. Users publish trust lists, and each user downloads messages only from identities they trust and identities trusted by identities they trust. FMS is developed anonymously and can be downloaded from the FMS freesite within Freenet. It does not have an official site on the normal Internet. It features random post delay, support for many identities, and a distinction between trusting a user's posts and trusting their trust list. It is written in C++ and is a separate application from Freenet which uses the Freenet Client Protocol (FCP) to interface with Freenet.
Frost
Frost includes support for convenient file sharing, but its design is inherently vulnerable to spam and denial of service attacks. Frost can be downloaded from the Frost home page on SourceForge, or from the Frost freesite within Freenet. It is not endorsed by the Freenet developers. Frost is written in Java and is a separate application from Freenet.
Sone
Sone provides a simpler interface inspired by Facebook with public anonymous discussions and image galleries. It provides an API for control from other programs, which is also used to implement a comment system for static websites on the regular Internet.
Utilities
jSite
jSite is a tool to upload websites. It handles keys and manages uploading files.
Infocalypse
Infocalypse is an extension for the distributed revision control system Mercurial. It uses an optimized structure to minimize the number of requests to retrieve new data, and allows supporting a repository by securely reuploading most parts of the data without requiring the owner's private keys.
Libraries
FCPLib
FCPLib (Freenet Client Protocol Library) aims to be a cross-platform natively compiled set of C++-based functions for storing and retrieving information to and from Freenet. FCPLib supports Windows NT/2K/XP, Debian, BSD, Solaris, and macOS.
lib-pyFreenet
lib-pyFreenet exposes Freenet functionality to Python programs. Infocalypse uses it.
Vulnerabilities
Law enforcement agencies have claimed to have successfully infiltrated Freenet opennet in order to deanonymize users, but no technical details have been given to support these allegations. One report stated that, "A child-porn investigation focused on ... [the suspect] when the authorities were monitoring the online network, Freenet." A different report indicated that arrests may have been based on the BlackICE project leaks, which have been debunked for using bad math, an incorrectly calculated false-positive rate, and a flawed model.
A court case in the Peel Region of Ontario, Canada, R. v. Owen, 2017 ONCJ 729 (CanLII), illustrated that law enforcement agencies do in fact have a presence on the network, after Peel Regional Police located a suspect who had been downloading illegal material on the Freenet network. The court decision indicates that a Canadian law enforcement agency operates nodes running modified Freenet software in the hope of determining who is requesting illegal material.
Routing Table Insertion (RTI) Attack.
Notability
Freenet has had significant publicity in the mainstream press, including articles in The New York Times, and coverage on CNN, 60 Minutes II, the BBC, The Guardian, and elsewhere.
Freenet received the SUMA-Award 2014 for "protection against total surveillance".
Freesite
A "freesite" is a site hosted on the Freenet network. Because it contains only static content, it cannot contain any active content like server side scripts or databases. Freesites are coded in HTML and support as many features as the browser viewing the page allows; however, there are some exceptions where the Freenet software will remove parts of the code that may be used to reveal the identity of the person viewing the page (making a page access something on the internet, for example).
See also
Peer-to-peer web hosting
Rendezvous protocol
Anonymous P2P
Crypto-anarchism
Cypherpunk
Distributed file system
Freedom of information
Friend-to-friend
Comparable software
GNUnet
I2P
Java Anon Proxy (also known as JonDonym)
Osiris
Perfect Dark – also creates a distributed data store shared by anonymous nodes; the successor to Share, which itself is the successor of Winny.
Tahoe-LAFS
ZeroNet
References
Further reading
External links
Free file transfer software
Free file sharing software
Distributed file systems
Anonymous file sharing networks
Anonymity networks
Application layer protocols
Distributed data storage systems
Distributed data storage
Distributed data structures
File sharing
Free software programmed in Java (programming language)
Cross-platform software
Beta software
2000 introductions
Key-based routing
Overlay networks
Mix networks |
11026 | https://en.wikipedia.org/wiki/List%20of%20programmers | List of programmers | This is a list of programmers notable for their contributions to software, either as original author or architect, or for later additions. All entries must already have associated articles.
A
Michael Abrash – program optimization and x86 assembly language
Scott Adams – one of earliest developers of CP/M and DOS games
Tarn Adams – created Dwarf Fortress
Leonard Adleman – cocreated RSA algorithm (being the A in that name), coined the term computer virus
Alfred Aho – cocreated AWK (being the A in that name), and main author of famous Compilers: Principles, Techniques, and Tools (Dragon book)
Andrei Alexandrescu – author, expert on languages C++, D
Paul Allen – Altair BASIC, Applesoft BASIC, cofounded Microsoft
Eric Allman – sendmail, syslog
Marc Andreessen – cocreated Mosaic, cofounded Netscape
Jeremy Ashkenas – created CoffeeScript programming language and Backbone.js
Bill Atkinson – QuickDraw, HyperCard
B
Roland Carl Backhouse – computer program construction, algorithmic problem solving, ALGOL
John Backus – Fortran, BNF
Lars Bak – virtual machine specialist
Richard Bartle – MUD, with Roy Trubshaw, created MUDs
Friedrich L. Bauer – Stack (data structure), Sequential Formula Translation, ALGOL, software engineering, Bauer–Fike theorem
Kent Beck – created Extreme programming, cocreated JUnit
Donald Becker – Linux Ethernet drivers, Beowulf clustering
Brian Behlendorf – Apache HTTP Server
Doug Bell – Dungeon Master series of video games
Fabrice Bellard – created FFmpeg open codec library, QEMU virtualization tools
Tim Berners-Lee – invented World Wide Web
Daniel J. Bernstein – djbdns, qmail
Eric Bina – cocreated Mosaic web browser
Marc Blank – cocreated Zork
Joshua Bloch – core Java language designer, led the Java collections framework project
Jonathan Blow – video game designer and programmed Braid and The Witness
Susan G. Bond – cocreated ALGOL 68-R
Grady Booch – cocreated Unified Modeling Language
Bert Bos – authored Argo web browser, co-authored Cascading Style Sheets
Stephen R. Bourne – cocreated ALGOL 68C, created Bourne shell
David Bradley – coder on the IBM PC project team who wrote the Control-Alt-Delete keyboard handler, embedded in all PC-compatible BIOSes
Andrew Braybrook – video games Paradroid and Uridium
Larry Breed – implementation of Iverson Notation (APL), co-developed APL\360, Scientific Time Sharing Corporation cofounder
Jack Elton Bresenham – created Bresenham's line algorithm
Dan Bricklin – cocreated VisiCalc, the first personal spreadsheet program
Walter Bright – Digital Mars, First C++ compiler, authored D (programming language)
Sergey Brin – cofounded Google Inc.
Per Brinch Hansen (surname "Brinch Hansen") – RC 4000 multiprogramming system, operating system kernels, microkernels, monitors, concurrent programming, Concurrent Pascal, distributed computing & processes, parallel computing
Richard Brodie – Microsoft Word
Andries Brouwer – Hack, former maintainer of man pages, Linux kernel hacker
Danielle Bunten Berry (Dani Bunten) – M.U.L.E., multiplayer video game and other noted video games
Dries Buytaert – created Drupal
C
Steve Capps – cocreated Macintosh and Newton
John Carmack – first-person shooters Doom, Quake
Vint Cerf – TCP/IP, NCP
Ward Christensen – wrote the first BBS (Bulletin Board System) system CBBS
Edgar F. Codd – principal architect of relational model
Bram Cohen – BitTorrent protocol design and implementation
Alain Colmerauer – Prolog
Alan Cooper – Visual Basic
Mike Cowlishaw – REXX and NetRexx, LEXX editor, image processing, decimal arithmetic packages
Alan Cox – co-developed Linux kernel
Brad Cox – Objective-C
Mark Crispin – created IMAP, authored UW-IMAP, one of reference implementations of IMAP4
William Crowther – Colossal Cave Adventure
Ward Cunningham – created Wiki concept
Dave Cutler – architected RSX-11M, OpenVMS, VAXELN, DEC MICA, Windows NT
D
Ole-Johan Dahl – cocreated Simula, object-oriented programming
Ryan Dahl – created Node.js
James Duncan Davidson – created Tomcat, now part of Jakarta Project
Terry A. Davis – developer of TempleOS
Jeff Dean – Spanner, Bigtable, MapReduce
L. Peter Deutsch – Ghostscript, Assembler for PDP-1, XDS-940 timesharing system, QED original co-author
Robert Dewar – IFIP WG 2.1 member, chairperson, ALGOL 68; AdaCore cofounder, president, CEO
Edsger W. Dijkstra – contributions to ALGOL, Dijkstra's algorithm, Go To Statement Considered Harmful, IFIP WG 2.1 member
Matt Dillon – programmed various software including DICE and DragonflyBSD
Jack Dorsey – created Twitter
Martin Dougiamas – creator and lead developer of Moodle
Adam Dunkels – authored Contiki operating system, the lwIP and uIP embedded TCP/IP stacks, invented protothreads
E
Les Earnest – authored finger program
Alan Edelman – Edelman's Law, stochastic operator, Interactive Supercomputing, Julia (programming language) cocreator, high performance computing, numerical computing
Brendan Eich – created JavaScript
Larry Ellison – cocreated Oracle Database, cofounded Oracle Corporation
Andrey Ershov – languages ALPHA, Rapira; first Soviet time-sharing system AIST-0, electronic publishing system RUBIN, multiprocessing workstation MRAMOR, IFIP WG 2.1 member, Aesthetics and the Human Factor in Programming
Marc Ewing – created Red Hat Linux
F
Scott Fahlman – created smiley face emoticon :-)
Dan Farmer – created COPS and Security Administrator Tool for Analyzing Networks (SATAN) Security Scanners
Steve Fawkner – created Warlords and Puzzle Quest
Stuart Feldman – created make, authored Fortran 77 compiler, part of original group that created Unix
David Filo – cocreated Yahoo!
Brad Fitzpatrick – created memcached, Livejournal and OpenID
Andrew Fluegelman – author PC-Talk communications software; considered a cocreator of shareware
Martin Fowler – created Dependency Injection pattern of software engineering, a form of Inversion of control
Brian Fox – created Bash, Readline, GNU Finger
G
Elon Gasper – cofounded Bright Star Technology, patented realistic facial movements for in-game speech; HyperAnimator, Alphabet Blocks, etc.
Bill Gates – Altair BASIC, cofounded Microsoft
Nick Gerakines – author, contributor to open-source Erlang projects
Jim Gettys – X Window System, HTTP/1.1, One Laptop per Child, Bufferbloat
Steve Gibson – created SpinRite
John Gilmore – GNU Debugger (GDB)
Adele Goldberg – cocreated Smalltalk
Ryan C. Gordon (a.k.a. Icculus) – Lokigames, ioquake3
James Gosling – Java, Gosling Emacs, NeWS
Bill Gosper – Macsyma, Lisp machine, hashlife, helped Donald Knuth on Vol.2 of The Art of Computer Programming (Semi-numerical algorithms)
Paul Graham – Yahoo! Store, On Lisp, ANSI Common Lisp
John Graham-Cumming – authored POPFile, a Bayesian filter-based e-mail classifier
Ralph Griswold – cocreated SNOBOL, created Icon (programming language)
Richard Greenblatt – Lisp machine, Incompatible Timesharing System, MacHack
Neil J. Gunther – authored Pretty Damn Quick (PDQ) performance modeling program
Scott Guthrie (a.k.a. ScottGu) – ASP.NET creator
Jürg Gutknecht – with Niklaus Wirth: Lilith computer; Modula-2, Oberon, Zonnon programming languages; Oberon operating system
Andi Gutmans – cocreated PHP programming language
Michael Guy – Phoenix, work on number theory, computer algebra, higher dimension polyhedra theory, ALGOL 68C; work with John Horton Conway
H
Daniel Ha – cofounder and CEO of blog comment platform Disqus
Nico Habermann – work on operating systems, software engineering, inter-process communication, process synchronization, deadlock avoidance, software verification, programming languages: ALGOL 60, BLISS, Pascal, Ada
Jim Hall – started the FreeDOS project
Margaret Hamilton – Director of Software Engineering Division of MIT Instrumentation Laboratory, which developed on-board flight software for the space Apollo program
Eric Hehner – predicative programming, formal methods, quote notation, ALGOL
David Heinemeier Hansson – created the Ruby on Rails framework for developing web applications
Rebecca Heineman – authored Bard's Tale III: Thief of Fate and Dragon Wars
Gernot Heiser – operating system teaching, research, commercialising, Open Kernel Labs, OKL4, Wombat
Anders Hejlsberg – Turbo Pascal, Borland Delphi, C#, TypeScript
Ted Henter – founded Henter-Joyce (now part of Freedom Scientific) created JAWS screen reader software for blind people
Andy Hertzfeld – cocreated Macintosh, cofounded General Magic, cofounded Eazel
D. Richard Hipp – created SQLite
C. A. R. Hoare – first implementation of quicksort, ALGOL 60 compiler, Communicating sequential processes
Louis Hodes – Lisp, pattern recognition, logic programming, cancer research
Grace Hopper – Harvard Mark I computer, FLOW-MATIC, COBOL
David A. Huffman – created the Huffman Code compression algorithm
Roger Hui – created J
Dave Hyatt – co-authored Mozilla Firefox
P. J. Hyett – cofounded GitHub
I
Miguel de Icaza – GNOME project leader, initiated Mono project
Roberto Ierusalimschy – Lua leading architect
Dan Ingalls – cocreated Smalltalk and Bitblt
Geir Ivarsøy – cocreated Opera web browser
Ken Iverson – APL, J
Toru Iwatani – created Pac-Man
J
Bo Jangeborg – Sinclair ZX Spectrum games
Paul Jardetzky – authored server program for the first webcam
Stephen C. Johnson – yacc
Lynne Jolitz – 386BSD
William Jolitz – 386BSD
Bill Joy – BSD, csh, vi, cofounded Sun Microsystems
Robert K. Jung – created ARJ
K
Poul-Henning Kamp – MD5 password hash algorithm, FreeBSD GEOM and GBDE, part of UFS2, FreeBSD Jails, malloc and the Beerware license
Mitch Kapor – Lotus 1-2-3, founded Lotus Development Corporation
Phil Katz – created Zip (file format), authored PKZIP
Ted Kaehler – contributions to Smalltalk, Squeak, HyperCard
Alan Kay – Smalltalk, Dynabook, Object-oriented programming, Squeak
Mel Kaye – LGP-30 and RPC-4000 machine code programmer at Royal McBee in the 1950s, famed as "Real Programmer" in the Story of Mel
Stan Kelly-Bootle – Manchester Mark 1, The Devil's DP Dictionary
John Kemeny – cocreated BASIC
Brian Kernighan – cocreated AWK (being the K in that name), authored ditroff text-formatting tool
Gary Kildall – CP/M, MP/M, BIOS, PL/M, also known for work on data-flow analysis, binary recompilers, multitasking operating systems, graphical user interfaces, disk caching, CD-ROM file system and data structures, early multi-media technologies, founded Digital Research (DRI)
Tom Knight – Incompatible Timesharing System
Jim Knopf – a.k.a. Jim Button, author PC-File flatfile database; cocreated shareware
Donald E. Knuth – TeX, CWEB, Metafont, The Art of Computer Programming, Concrete Mathematics
Andrew R. Koenig – co-authored books on C and C++ and former Project Editor of ISO/ANSI standards committee for C++
Cornelis H. A. Koster – Report on the Algorithmic Language ALGOL 68, ALGOL 68 transput
L
Andre LaMothe – created XGameStation, one of world's first video game console development kits
Leslie Lamport – LaTeX
Butler Lampson – QED original co-author
Peter Landin – ISWIM, J operator, SECD machine, off-side rule, syntactic sugar, ALGOL, IFIP WG 2.1 member
Tom Lane – main author of libjpeg, major developer of PostgreSQL
Sam Lantinga – created Simple DirectMedia Layer (SDL)
Dick Lathwell – codeveloped APL\360
Chris Lattner – main author of LLVM project
Samuel J Leffler – BSD, FlexFAX, libtiff, FreeBSD Wireless Device Drivers
Rasmus Lerdorf – original created PHP
Michael Lesk – Lex
Gordon Letwin – architected OS/2, authored High Performance File System (HPFS)
Jochen Liedtke – microkernel operating systems Eumel, L3, L4
Charles H. Lindsey – IFIP WG 2.1 member, Revised Report on ALGOL 68
Håkon Wium Lie – co-authored Cascading Style Sheets
Yanhong Annie Liu – programming languages, algorithms, program design, program optimization, software systems, optimizing, analysis, and transformations, intelligent systems, distributed computing, computer security, IFIP WG 2.1 member
Robert Love – Linux kernel developer
Ada Lovelace – first programmer (of Charles Babbage's Analytical Engine)
Al Lowe – created Leisure Suit Larry series
David Luckham – Lisp, Automated theorem proving, Stanford Pascal Verifier, Complex event processing, Rational Software cofounder (Ada compiler)
Hans Peter Luhn – hash-coding, linked list, searching and sorting binary tree
M
Khaled Mardam-Bey – created mIRC (Internet Relay Chat Client)
Robert C. Martin – authored Clean Code, The Clean Coder, leader of Clean Code movement, signatory on the Agile Manifesto
John Mashey – authored PWB shell, also called Mashey shell
Yukihiro Matsumoto – Ruby
John McCarthy – Lisp, ALGOL, IFIP WG 2.1 member, artificial intelligence
Craig McClanahan – original author Jakarta Struts, architect of Tomcat Catalina servlet container
Daniel D. McCracken – professor at City College and authored Guide to Algol Programming, Guide to Cobol Programming, Guide to Fortran Programming (1957)
Scott A. McGregor – architect and development team lead of Microsoft Windows 1.0, co-authored X Window System version 11, and developed Cedar Viewers Windows System at Xerox PARC
Douglas McIlroy – macros, pipes and filters, concept of software componentry, Unix tools (spell, diff, sort, join, graph, speak, tr, etc.)
Marshall Kirk McKusick – Berkeley Software Distribution (BSD), work on FFS, implemented soft updates
Sid Meier – author, Civilization and Railroad Tycoon, cofounded Microprose
Bertrand Meyer – Eiffel, Object-oriented Software Construction, design by contract
Bob Miner – cocreated Oracle Database, cofounded Oracle Corporation
Jeff Minter – psychedelic, and often llama-related video games
James G. Mitchell – WATFOR compiler, Mesa (programming language), Spring (operating system), ARM architecture
Arvind Mithal – formal verification of large digital systems, developing dynamic dataflow architectures, parallel computing programming languages (Id, pH), compiling on parallel machines
Petr Mitrichev – competitive programmer
Cleve Moler – co-authored LINPACK, EISPACK, and MATLAB
Lou Montulli – created Lynx browser, cookies, the blink tag, server push and client pull, HTTP proxying, HTTP over SSL, browser integration with animated GIFs, founding member of HTML working group at W3C
Bram Moolenaar – authored text-editor Vim
David A. Moon – Maclisp, ZetaLisp
Charles H. Moore – created Forth language
Roger Moore – co-developed APL\360, created IPSANET, cofounded I. P. Sharp Associates
Matt Mullenweg – authored WordPress
Boyd Munro – Australian developer of GRASP, owns SDI, one of the earliest software development companies
Mike Muuss – authored ping, network tool to detect hosts
N
Patrick Naughton – early Java designer, HotJava
Peter Naur (1928–2016) – Backus–Naur form (BNF), ALGOL 60, IFIP WG 2.1 member
Fredrik Neij – cocreated The Pirate Bay
Graham Nelson – created Inform authoring system for interactive fiction
Greg Nelson (1953–2015) – satisfiability modulo theories, extended static checking, program verification, Modula-3 committee, Simplify theorem prover in ESC/Java
Klára Dán von Neumann (1911–1963) – principal programmer for the MANIAC I
Maurice Nivat (1937–2017) – theoretical computer science, Theoretical Computer Science journal, ALGOL, IFIP WG 2.1 member
Phiwa Nkambule – cofounded Riovic, founded Cybatar
Peter Norton – programmed Norton Utilities
Kristen Nygaard (1926–2002) – Simula, object-oriented programming
O
Ed Oates – cocreated Oracle Database, cofounded Oracle Corporation
Martin Odersky – Scala
Peter O'Hearn – separation logic, bunched logic, Infer Static Analyzer
Jarkko Oikarinen – created Internet Relay Chat (IRC)
Andrew and Philip Oliver, the Oliver Twins – many Sinclair ZX Spectrum games including Dizzy
John Ousterhout – created Tcl/Tk
P
Keith Packard – X Window System
Larry Page – cofounded Google, Inc.
Alexey Pajitnov – created game Tetris on Electronica 60
Seymour Papert – Logo (programming language)
David Park (1935–1990) – first Lisp implementation, expert in fairness, program schemas, bisimulation in concurrent computing
Mike Paterson – algorithms, analysis of algorithms (complexity)
Tim Paterson – authored 86-DOS (QDOS)
Markus Persson – created Minecraft
Jeffrey Peterson – key free and open-source software architect, created Quepasa
Charles Petzold – authored many Microsoft Windows programming books
Rob Pike – wrote first bitmapped window system for Unix, cocreated UTF-8 character encoding, authored text editor sam and programming environment acme, main author of Plan 9 and Inferno operating systems, and co-authored Go programming language
Kent Pitman – technical contributor to the ANSI Common Lisp standard
Tom Preston-Werner – cofounded GitHub
Q
R
Theo de Raadt – founding member of NetBSD, founded OpenBSD and OpenSSH
Brian Randell – ALGOL 60, software fault tolerance, dependability, pre-1950 history of computing hardware
Jef Raskin – started the Macintosh project in Apple Computer, designed Canon Cat computer, developed Archy (The Humane Environment) program
Eric S. Raymond – Open Source movement, authored fetchmail
Hans Reiser – created ReiserFS file system
John Resig – creator and lead developer of the jQuery JavaScript library
Craig Reynolds – created boids computer graphics simulation
John C. Reynolds – continuations, definitional interpreters, defunctionalization, Forsythe, Gedanken language, intersection types, polymorphic lambda calculus, relational parametricity, separation logic, ALGOL
Reinder van de Riet – Editor: Europe of Data and Knowledge Engineering, COLOR-X event modeling language
Dennis Ritchie – C, Unix, Plan 9 from Bell Labs, Inferno
Ron Rivest – cocreated RSA algorithm (being the R in that name). created RC4 and MD5
John Romero – first-person shooters Doom, Quake
Blake Ross – co-authored Mozilla Firefox
Douglas T. Ross – Automatically Programmed Tools (APT), Computer-aided design, structured analysis and design technique, ALGOL X
Guido van Rossum – Python
Jeff Rulifson – lead programmer on the NLS project
Rusty Russell – created iptables for linux
Steve Russell – first Lisp interpreter; original Spacewar! graphic video game
Mark Russinovich – Sysinternals.com, Filemon, Regmon, Process Explorer, TCPView and RootkitRevealer
S
Bob Sabiston – Rotoshop, interpolating rotoscope animation software
Muni Sakya – Nepalese software
Carl Sassenrath – Amiga, REBOL
Chris Sawyer – developed RollerCoaster Tycoon and the Transport Tycoon series
Cher Scarlett – Apple, Webflow, Blizzard Entertainment, World Wide Technology, and USA Today
Bob Scheifler – X Window System, Jini
Isai Scheinberg – IBM engineer, founded PokerStars
Bill Schelter – GNU Maxima, GNU Common Lisp
John Scholes – Direct functions
Randal L. Schwartz – Just another Perl hacker
Adi Shamir – cocreated RSA algorithm (being the S in that name)
Mike Shaver – founding member of Mozilla Organization
Cliff Shaw – Information Processing Language (IPL), the first AI language
Zed Shaw – wrote the Mongrel Web Server, for Ruby web applications
Emily Short – prolific writer of Interactive fiction and co-developed Inform version 7
Jacek Sieka – developed DC++ an open-source, peer-to-peer file-sharing client
Daniel Siewiorek – electronic design automation, reliability computing, context aware mobile computing, wearable computing, computer-aided design, rapid prototyping, fault tolerance
Ken Silverman – created Duke Nukem 3D's graphics engine
Charles Simonyi – Hungarian notation, Bravo (the first WYSIWYG text editor), Microsoft Word
Colin Simpson – developed CircuitLogix simulation software
Rich Skrenta – cofounded DMOZ
Matthew Smith – Sinclair ZX Spectrum games, including Manic Miner and Jet Set Willy
Henry Spencer – C News, Regex
Joel Spolsky – cofounded Fog Creek Software and Stack Overflow
Quentin Stafford-Fraser – authored original VNC viewer, first Windows VNC server, client program for the first webcam
Richard Stallman – Emacs, GNU Compiler Collection (GCC), GDB, founder and pioneer of GNU Project, terminal-independent I/O pioneer on Incompatible Timesharing System (ITS), Lisp machine manual
Guy L. Steele Jr. – Common Lisp, Scheme, Java
Alexander Stepanov – created Standard Template Library
Christopher Strachey – draughts playing program
Ludvig Strigeus – created uTorrent, OpenTTD, ScummVM and the technology behind Spotify
Bjarne Stroustrup – created C++
Zeev Suraski – cocreated PHP language
Gerald Jay Sussman – Scheme
Herb Sutter – chair of ISO C++ standards committee and C++ expert
Gottfrid Svartholm – cocreated The Pirate Bay
Aaron Swartz – software developer, writer, Internet activist
Tim Sweeney – The Unreal engine, UnrealScript, ZZT
T
Amir Taaki – leading developer for Bitcoin project
Andrew Tanenbaum – Minix
Audrey "Autrijus" Tang – designed Pugs
Simon Tatham – Netwide Assembler (NASM), PuTTY
Larry Tesler – the Smalltalk code browser, debugger and object inspector, and (with Tim Mott) the Gypsy word processor
Jon Stephenson von Tetzchner – cocreated Opera web browser
Avie Tevanian – authored Mach kernel
Ken Thompson – mainly designed and authored Unix, Plan 9 and Inferno operating systems, B and Bon languages (precursors of C), created UTF-8 character encoding, introduced regular expressions in QED and co-authored Go language
Michael Tiemann – G++, GNU Compiler Collection (GCC)
Linus Torvalds – original author and current maintainer of Linux kernel and created Git, a source code management system
Andrew Tridgell – Samba, Rsync
Roy Trubshaw – MUD – together with Richard Bartle, created MUDs
Bob Truel – cofounded DMOZ
Alan Turing – mathematician, computer scientist and cryptanalyst
David Turner – SASL, Kent Recursive Calculator, Miranda, IFIP WG 2.1 member
U
V
Wietse Venema – Postfix, Security Administrator Tool for Analyzing Networks (SATAN), TCP Wrapper
Pat Villani – original author FreeDOS/DOS-C kernel, maintainer of a defunct Linux for Windows 9x distribution
Paul Vixie – BIND, Cron
Patrick Volkerding – original author and current maintainer of Slackware Linux Distribution
W
Eiiti Wada – ALGOL N, IFIP WG 2.1 member, Japanese Industrial Standards (JIS) X 0208, 0212, Happy Hacking Keyboard
John Walker – cofounded Autodesk
Larry Wall – Warp (1980s space-war game), rn, patch, Perl
Bob Wallace – author PC-Write word processor; considered shareware cocreator
Chris Wanstrath – cofounded GitHub
John Warnock – created PostScript
Robert Watson – FreeBSD network stack parallelism, TrustedBSD project and OpenBSM
Joseph Henry Wegstein – ALGOL 58, ALGOL 60, IFIP WG 2.1 member, data processing technical standards, fingerprint analysis
Pei-Yuan Wei – authored ViolaWWW, one of earliest graphical browsers
Peter J. Weinberger – cocreated AWK (being the W in that name)
Jim Weirich – created Rake, Builder, and RubyGems for Ruby; popular teacher and conference speaker
Joseph Weizenbaum – created ELIZA
David Wheeler – cocreated subroutine; designed WAKE; co-designed Tiny Encryption Algorithm, XTEA, Burrows–Wheeler transform
Arthur Whitney – A+, K
why the lucky stiff – created libraries and writing for Ruby, including quirky, popular Why's (poignant) Guide to Ruby to teach programming
Adriaan van Wijngaarden – Dutch pioneer; ARRA, ALGOL, IFIP WG 2.1 member
Bruce Wilcox – created Computer Go, programmed NEMESIS Go Master
Evan Williams – created Blogger, cofounded Twitter and Medium
Roberta and Ken Williams – Sierra Entertainment, King's Quest, graphic adventure game
Sophie Wilson – designed instruction set for Acorn RISC Machine, authored BBC BASIC
Dave Winer – developed XML-RPC, Frontier scripting language
Niklaus Wirth – ALGOL W, IFIP WG 2.1 member, Pascal, Modula-2, Oberon
Stephen Wolfram – created Mathematica
Don Woods – INTERCAL, Colossal Cave Adventure
Philip Woodward – ambiguity function, sinc function, comb operator, rep operator, ALGOL 68-R
Steve Wozniak – Breakout, Apple Integer BASIC, cofounded Apple Inc.
Will Wright – created the Sim City series, cofounded Maxis
William Wulf – BLISS system programming language + optimizing compiler, Hydra operating system, Tartan Laboratories
Y
Jerry Yang – cocreated Yahoo!
Victor Yngve – authored first string processing language, COMIT
Nobuo Yoneda – Yoneda lemma, Yoneda product, ALGOL, IFIP WG 2.1 member
Z
Matei Zaharia – created Apache Spark
Jamie Zawinski – Lucid Emacs, Netscape Navigator, Mozilla, XScreenSaver
Phil Zimmermann – created encryption software PGP, the ZRTP protocol, and Zfone
Mark Zuckerberg – created Facebook
See also
List of computer scientists
List of computing people
List of important publications in computer science
List of members of the National Academy of Sciences (computer and information sciences)
List of pioneers in computer science
List of programming language researchers
List of Russian programmers
List of video game industry people (programming)
Programmers
Computer Programmers |
11527 | https://en.wikipedia.org/wiki/Fundamental%20theorem%20on%20homomorphisms | Fundamental theorem on homomorphisms | In abstract algebra, the fundamental theorem on homomorphisms, also known as the fundamental homomorphism theorem, or the first isomorphism theorem, relates the structure of two objects between which a homomorphism is given, and of the kernel and image of the homomorphism.
The homomorphism theorem is used to prove the isomorphism theorems.
Group theoretic version
Given two groups G and H and a group homomorphism f : G → H, let K be a normal subgroup in G and φ : G → G/K the natural surjective homomorphism (where G/K is the quotient group of G by K). If K is a subset of ker(f) then there exists a unique homomorphism h : G/K → H such that f = h ∘ φ.
In other words, the natural projection φ is universal among homomorphisms on G that map K to the identity element.
The situation is described by the following commutative diagram:
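The diagram itself does not survive in this text; a minimal LaTeX sketch of the intended triangle, assuming the tikz-cd package and the notation f, φ, h used above, is:

```latex
% Commutative triangle for the fundamental theorem on homomorphisms.
% f : G -> H factors through the natural projection \varphi : G -> G/K
% via the induced map h, i.e. f = h \circ \varphi.
\begin{tikzcd}
G \arrow[r, "f"] \arrow[d, "\varphi"'] & H \\
G/K \arrow[ur, dashed, "h"'] &
\end{tikzcd}
```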
h is injective if and only if K = ker(f). Therefore, by setting K = ker(f) we immediately get the first isomorphism theorem.
We can write the statement of the fundamental theorem on homomorphisms of groups as "every homomorphic image of a group is isomorphic to a quotient group".
Other versions
Similar theorems are valid for monoids, vector spaces, modules, and rings.
See also
Quotient category
References
.
.
.
.
Theorems in abstract algebra |
11866 | https://en.wikipedia.org/wiki/Global%20Positioning%20System | Global Positioning System | The Global Positioning System (GPS), originally Navstar GPS, is a satellite-based radionavigation system owned by the United States government and operated by the United States Space Force. It is one of the global navigation satellite systems (GNSS) that provides geolocation and time information to a GPS receiver anywhere on or near the Earth where there is an unobstructed line of sight to four or more GPS satellites. Obstacles such as mountains and buildings can block the relatively weak GPS signals.
The GPS does not require the user to transmit any data, and it operates independently of any telephonic or Internet reception, though these technologies can enhance the usefulness of the GPS positioning information. The GPS provides critical positioning capabilities to military, civil, and commercial users around the world. The United States government created the system, maintains and controls it, and makes it freely accessible to anyone with a GPS receiver.
The GPS project was started by the U.S. Department of Defense in 1973. The first prototype spacecraft was launched in 1978 and the full constellation of 24 satellites became operational in 1993. Originally limited to use by the United States military, civilian use was allowed from the 1980s following an executive order from President Ronald Reagan after the Korean Air Lines Flight 007 incident. Advances in technology and new demands on the existing system have now led to efforts to modernize the GPS and implement the next generation of GPS Block IIIA satellites and Next Generation Operational Control System (OCX). Announcements from Vice President Al Gore and the Clinton Administration in 1998 initiated these changes, which were authorized by the U.S. Congress in 2000.
During the 1990s, GPS quality was degraded by the United States government in a program called "Selective Availability"; this was discontinued on May 1, 2000, in accordance with a law signed by President Bill Clinton.
The GPS service is controlled by the United States government, which can selectively deny access to the system, as happened to the Indian military in 1999 during the Kargil War, or degrade the service at any time. As a result, several countries have developed or are in the process of setting up other global or regional satellite navigation systems. The Russian Global Navigation Satellite System (GLONASS) was developed contemporaneously with GPS, but suffered from incomplete coverage of the globe until the mid-2000s. GLONASS can be added to GPS devices, making more satellites available and enabling positions to be fixed more quickly and accurately, to within . China's BeiDou Navigation Satellite System began global services in 2018, and finished its full deployment in 2020.
There are also the European Union Galileo navigation satellite system, and India's NavIC. Japan's Quasi-Zenith Satellite System (QZSS) is a GPS satellite-based augmentation system to enhance GPS's accuracy in Asia-Oceania, with satellite navigation independent of GPS scheduled for 2023.
When selective availability was lifted in 2000, GPS had about a accuracy. GPS receivers that use the L5 band can have much higher accuracy, pinpointing to within , while high-end users (typically engineering and land surveying applications) are able to have accuracy on several of the bandwidth signals to within two centimeters, and even sub-millimeter accuracy for long-term measurements. , 16 GPS satellites are broadcasting L5 signals, and the signals are considered pre-operational, scheduled to reach 24 satellites by approximately 2027.
History
The GPS project was launched in the United States in 1973 to overcome the limitations of previous navigation systems, combining ideas from several predecessors, including classified engineering design studies from the 1960s. The U.S. Department of Defense developed the system, which originally used 24 satellites, for use by the United States military, and became fully operational in 1995. Civilian use was allowed from the 1980s. Roger L. Easton of the Naval Research Laboratory, Ivan A. Getting of The Aerospace Corporation, and Bradford Parkinson of the Applied Physics Laboratory are credited with inventing it. The work of Gladys West is credited as instrumental in the development of computational techniques for detecting satellite positions with the precision needed for GPS.
The design of GPS is based partly on similar ground-based radio-navigation systems, such as LORAN and the Decca Navigator, developed in the early 1940s.
In 1955, Friedwardt Winterberg proposed a test of general relativity—detecting time slowing in a strong gravitational field using accurate atomic clocks placed in orbit inside artificial satellites.
Special and general relativity predict that the clocks on the GPS satellites would be seen by the Earth's observers to run 38 microseconds faster per day than the clocks on the Earth. The design of GPS corrects for this difference; without doing so, GPS calculated positions would accumulate errors on the order of 10 kilometres per day.
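As a rough check of the order of magnitude involved, the daily clock offset can be converted into an equivalent ranging error by multiplying by the speed of light; the following sketch simply performs that arithmetic and is not an authoritative GPS error budget.

```python
# Back-of-the-envelope estimate: uncorrected relativistic clock drift of
# 38 microseconds per day, expressed as an equivalent ranging error.
SPEED_OF_LIGHT_M_PER_S = 299_792_458
CLOCK_OFFSET_S_PER_DAY = 38e-6

error_m_per_day = CLOCK_OFFSET_S_PER_DAY * SPEED_OF_LIGHT_M_PER_S
print(f"~{error_m_per_day / 1000:.1f} km of ranging error per day")  # ~11.4 km/day
```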
Predecessors
On 16 February 1955, Dutch naval officer Wijnand Langeraar filed a patent application with the US Patent Office for a radio-based long-range navigation system; it was granted as Patent US2980907A on 18 April 1961.
When the Soviet Union launched the first artificial satellite (Sputnik 1) in 1957, two American physicists, William Guier and George Weiffenbach, at Johns Hopkins University's Applied Physics Laboratory (APL) decided to monitor its radio transmissions. Within hours they realized that, because of the Doppler effect, they could pinpoint where the satellite was along its orbit. The Director of the APL gave them access to their UNIVAC to do the heavy calculations required.
Early the next year, Frank McClure, the deputy director of the APL, asked Guier and Weiffenbach to investigate the inverse problem—pinpointing the user's location, given the satellite's. (At the time, the Navy was developing the submarine-launched Polaris missile, which required them to know the submarine's location.) This led them and APL to develop the TRANSIT system. In 1959, ARPA (renamed DARPA in 1972) also played a role in TRANSIT.
TRANSIT was first successfully tested in 1960. It used a constellation of five satellites and could provide a navigational fix approximately once per hour.
In 1967, the U.S. Navy developed the Timation satellite, which proved the feasibility of placing accurate clocks in space, a technology required for GPS.
In the 1970s, the ground-based OMEGA navigation system, based on phase comparison of signal transmission from pairs of stations, became the first worldwide radio navigation system. Limitations of these systems drove the need for a more universal navigation solution with greater accuracy.
Although there were wide needs for accurate navigation in military and civilian sectors, almost none of those was seen as justification for the billions of dollars it would cost in research, development, deployment, and operation of a constellation of navigation satellites. During the Cold War arms race, the nuclear threat to the existence of the United States was the one need that did justify this cost in the view of the United States Congress. This deterrent effect is why GPS was funded. It is also the reason for the ultra-secrecy at that time. The nuclear triad consisted of the United States Navy's submarine-launched ballistic missiles (SLBMs) along with United States Air Force (USAF) strategic bombers and intercontinental ballistic missiles (ICBMs). Considered vital to the nuclear deterrence posture, accurate determination of the SLBM launch position was a force multiplier.
Precise navigation would enable United States ballistic missile submarines to get an accurate fix of their positions before they launched their SLBMs. The USAF, with two thirds of the nuclear triad, also had requirements for a more accurate and reliable navigation system. The U.S. Navy and U.S. Air Force were developing their own technologies in parallel to solve what was essentially the same problem.
To increase the survivability of ICBMs, there was a proposal to use mobile launch platforms (comparable to the Soviet SS-24 and SS-25) and so the need to fix the launch position had similarity to the SLBM situation.
In 1960, the Air Force proposed a radio-navigation system called MOSAIC (MObile System for Accurate ICBM Control) that was essentially a 3-D LORAN. A follow-on study, Project 57, was performed in 1963 and it was "in this study that the GPS concept was born." That same year, the concept was pursued as Project 621B, which had "many of the attributes that you now see in GPS" and promised increased accuracy for Air Force bombers as well as ICBMs.
Updates from the Navy TRANSIT system were too slow for the high speeds of Air Force operation. The Naval Research Laboratory (NRL) continued making advances with their Timation (Time Navigation) satellites, first launched in 1967, second launched in 1969, with the third in 1974 carrying the first atomic clock into orbit and the fourth launched in 1977.
Another important predecessor to GPS came from a different branch of the United States military. In 1964, the United States Army orbited its first Sequential Collation of Range (SECOR) satellite used for geodetic surveying. The SECOR system included three ground-based transmitters at known locations that would send signals to the satellite transponder in orbit. A fourth ground-based station, at an undetermined position, could then use those signals to fix its location precisely. The last SECOR satellite was launched in 1969.
Development
With these parallel developments in the 1960s, it was realized that a superior system could be developed by synthesizing the best technologies from 621B, Transit, Timation, and SECOR in a multi-service program. Satellite orbital position errors, induced by variations in the gravity field and radar refraction among others, had to be resolved. A team led by Harold L Jury of Pan Am Aerospace Division in Florida from 1970 to 1973, used real-time data assimilation and recursive estimation to do so, reducing systematic and residual errors to a manageable level to permit accurate navigation.
During Labor Day weekend in 1973, a meeting of about twelve military officers at the Pentagon discussed the creation of a Defense Navigation Satellite System (DNSS). It was at this meeting that the real synthesis that became GPS was created. Later that year, the DNSS program was named Navstar. Navstar is often erroneously considered an acronym for "NAVigation System Using Timing and Ranging" but was never considered as such by the GPS Joint Program Office (TRW may have once advocated for a different navigational system that used that acronym). With the individual satellites being associated with the name Navstar (as with the predecessors Transit and Timation), a more fully encompassing name was used to identify the constellation of Navstar satellites, Navstar-GPS. Ten "Block I" prototype satellites were launched between 1978 and 1985 (an additional unit was destroyed in a launch failure).
The effect of the ionosphere on radio transmission was investigated in a geophysics laboratory of Air Force Cambridge Research Laboratory, renamed to Air Force Geophysical Research Lab (AFGRL) in 1974. AFGRL developed the Klobuchar model for computing ionospheric corrections to GPS location. Of note is work done by Australian space scientist Elizabeth Essex-Cohen at AFGRL in 1974. She was concerned with the curving of the paths of radio waves (atmospheric refraction) traversing the ionosphere from NavSTAR satellites.
After Korean Air Lines Flight 007, a Boeing 747 carrying 269 people, was shot down in 1983 after straying into the USSR's prohibited airspace, in the vicinity of Sakhalin and Moneron Islands, President Ronald Reagan issued a directive making GPS freely available for civilian use, once it was sufficiently developed, as a common good. The first Block II satellite was launched on February 14, 1989, and the 24th satellite was launched in 1994. The GPS program cost at this point, not including the cost of the user equipment but including the costs of the satellite launches, has been estimated at US$5 billion.
Initially, the highest-quality signal was reserved for military use, and the signal available for civilian use was intentionally degraded, in a policy known as Selective Availability. This changed with President Bill Clinton signing on May 1, 2000, a policy directive to turn off Selective Availability to provide the same accuracy to civilians that was afforded to the military. The directive was proposed by the U.S. Secretary of Defense, William Perry, in view of the widespread growth of differential GPS services by private industry to improve civilian accuracy. Moreover, the U.S. military was actively developing technologies to deny GPS service to potential adversaries on a regional basis.
Since its deployment, the U.S. has implemented several improvements to the GPS service, including new signals for civil use and increased accuracy and integrity for all users, all the while maintaining compatibility with existing GPS equipment. Modernization of the satellite system has been an ongoing initiative by the U.S. Department of Defense through a series of satellite acquisitions to meet the growing needs of the military, civilians, and the commercial market.
As of early 2015, high-quality, FAA grade, Standard Positioning Service (SPS) GPS receivers provided horizontal accuracy of better than , although many factors such as receiver and antenna quality and atmospheric issues can affect this accuracy.
GPS is owned and operated by the United States government as a national resource. The Department of Defense is the steward of GPS. The Interagency GPS Executive Board (IGEB) oversaw GPS policy matters from 1996 to 2004. After that, the National Space-Based Positioning, Navigation and Timing Executive Committee was established by presidential directive in 2004 to advise and coordinate federal departments and agencies on matters concerning the GPS and related systems. The executive committee is chaired jointly by the Deputy Secretaries of Defense and Transportation. Its membership includes equivalent-level officials from the Departments of State, Commerce, and Homeland Security, the Joint Chiefs of Staff and NASA. Components of the executive office of the president participate as observers to the executive committee, and the FCC chairman participates as a liaison.
The U.S. Department of Defense is required by law to "maintain a Standard Positioning Service (as defined in the federal radio navigation plan and the standard positioning service signal specification) that will be available on a continuous, worldwide basis," and "develop measures to prevent hostile use of GPS and its augmentations without unduly disrupting or degrading civilian uses."
Timeline and modernization
In 1972, the USAF Central Inertial Guidance Test Facility (Holloman AFB) conducted developmental flight tests of four prototype GPS receivers in a Y configuration over White Sands Missile Range, using ground-based pseudo-satellites.
In 1978, the first experimental Block-I GPS satellite was launched.
In 1983, after Soviet interceptor aircraft shot down the civilian airliner KAL 007 that strayed into prohibited airspace because of navigational errors, killing all 269 people on board, U.S. President Ronald Reagan announced that GPS would be made available for civilian uses once it was completed, although it had been previously published [in Navigation magazine], and that the CA code (Coarse/Acquisition code) would be available to civilian users.
By 1985, ten more experimental Block-I satellites had been launched to validate the concept.
Beginning in 1988, command and control of these satellites was moved from Onizuka AFS, California to the 2nd Satellite Control Squadron (2SCS) located at Falcon Air Force Station in Colorado Springs, Colorado.
On February 14, 1989, the first modern Block-II satellite was launched.
The Gulf War from 1990 to 1991 was the first conflict in which the military widely used GPS.
In 1991, a project to create a miniature GPS receiver successfully ended, replacing the previous military receivers with a handheld receiver.
In 1992, the 2nd Space Wing, which originally managed the system, was inactivated and replaced by the 50th Space Wing.
By December 1993, GPS achieved initial operational capability (IOC), with a full constellation (24 satellites) available and providing the Standard Positioning Service (SPS).
Full Operational Capability (FOC) was declared by Air Force Space Command (AFSPC) in April 1995, signifying full availability of the military's secure Precise Positioning Service (PPS).
In 1996, recognizing the importance of GPS to civilian users as well as military users, U.S. President Bill Clinton issued a policy directive declaring GPS a dual-use system and establishing an Interagency GPS Executive Board to manage it as a national asset.
In 1998, United States Vice President Al Gore announced plans to upgrade GPS with two new civilian signals for enhanced user accuracy and reliability, particularly with respect to aviation safety, and in 2000 the United States Congress authorized the effort, referring to it as GPS III.
On May 2, 2000 "Selective Availability" was discontinued as a result of the 1996 executive order, allowing civilian users to receive a non-degraded signal globally.
In 2004, the United States government signed an agreement with the European Community establishing cooperation related to GPS and Europe's Galileo system.
In 2004, United States President George W. Bush updated the national policy and replaced the executive board with the National Executive Committee for Space-Based Positioning, Navigation, and Timing.
November 2004, Qualcomm announced successful tests of assisted GPS for mobile phones.
In 2005, the first modernized GPS satellite was launched and began transmitting a second civilian signal (L2C) for enhanced user performance.
On September 14, 2007, the aging mainframe-based Ground Segment Control System was transferred to the new Architecture Evolution Plan.
On May 19, 2009, the United States Government Accountability Office issued a report warning that some GPS satellites could fail as soon as 2010.
On May 21, 2009, the Air Force Space Command allayed fears of GPS failure, saying "There's only a small risk we will not continue to exceed our performance standard."
On January 11, 2010, an update of ground control systems caused a software incompatibility with 8,000 to 10,000 military receivers manufactured by a division of Trimble Navigation Limited of Sunnyvale, Calif.
On February 25, 2010, the U.S. Air Force awarded the contract to develop the GPS Next Generation Operational Control System (OCX) to improve accuracy and availability of GPS navigation signals, and serve as a critical part of GPS modernization.
Awards
On February 10, 1993, the National Aeronautic Association selected the GPS Team as winners of the 1992 Robert J. Collier Trophy, the US's most prestigious aviation award. This team combines researchers from the Naval Research Laboratory, the USAF, the Aerospace Corporation, Rockwell International Corporation, and IBM Federal Systems Company. The citation honors them "for the most significant development for safe and efficient navigation and surveillance of air and spacecraft since the introduction of radio navigation 50 years ago."
Two GPS developers received the National Academy of Engineering Charles Stark Draper Prize for 2003:
Ivan Getting, emeritus president of The Aerospace Corporation and an engineer at MIT, established the basis for GPS, improving on the World War II land-based radio system called LORAN (Long-range Radio Aid to Navigation).
Bradford Parkinson, professor of aeronautics and astronautics at Stanford University, conceived the present satellite-based system in the early 1960s and developed it in conjunction with the U.S. Air Force. Parkinson served twenty-one years in the Air Force, from 1957 to 1978, and retired with the rank of colonel.
GPS developer Roger L. Easton received the National Medal of Technology on February 13, 2006.
Francis X. Kane (Col. USAF, ret.) was inducted into the U.S. Air Force Space and Missile Pioneers Hall of Fame at Lackland A.F.B., San Antonio, Texas, March 2, 2010, for his role in space technology development and the engineering design concept of GPS conducted as part of Project 621B.
In 1998, GPS technology was inducted into the Space Foundation Space Technology Hall of Fame.
On October 4, 2011, the International Astronautical Federation (IAF) awarded the Global Positioning System (GPS) its 60th Anniversary Award, nominated by IAF member, the American Institute for Aeronautics and Astronautics (AIAA). The IAF Honors and Awards Committee recognized the uniqueness of the GPS program and the exemplary role it has played in building international collaboration for the benefit of humanity.
On December 6, 2018, Gladys West was inducted into the Air Force Space and Missile Pioneers Hall of Fame in recognition of her work on an extremely accurate geodetic Earth model, which was ultimately used to determine the orbit of the GPS constellation.
On February 12, 2019, four founding members of the project were awarded the Queen Elizabeth Prize for Engineering with the chair of the awarding board stating "Engineering is the foundation of civilisation; there is no other foundation; it makes things happen. And that's exactly what today's Laureates have done - they've made things happen. They've re-written, in a major way, the infrastructure of our world."
Basic concept
Fundamentals
The GPS receiver calculates its own four-dimensional position in spacetime based on data received from multiple GPS satellites. Each satellite carries an accurate record of its position and time, and transmits that data to the receiver.
The satellites carry very stable atomic clocks that are synchronized with one another and with ground clocks. Any drift from time maintained on the ground is corrected daily. In the same manner, the satellite locations are known with great precision. GPS receivers have clocks as well, but they are less stable and less precise.
Since the speed of radio waves is constant and independent of the satellite speed, the time delay between when the satellite transmits a signal and the receiver receives it is proportional to the distance from the satellite to the receiver. At a minimum, four satellites must be in view of the receiver for it to compute four unknown quantities (three position coordinates and the deviation of its own clock from satellite time).
More detailed description
Each GPS satellite continually broadcasts a signal (carrier wave with modulation) that includes:
A pseudorandom code (sequence of ones and zeros) that is known to the receiver. By time-aligning a receiver-generated version and the receiver-measured version of the code, the time of arrival (TOA) of a defined point in the code sequence, called an epoch, can be found in the receiver clock time scale
A message that includes the time of transmission (TOT) of the code epoch (in GPS time scale) and the satellite position at that time
Conceptually, the receiver measures the TOAs (according to its own clock) of four satellite signals. From the TOAs and the TOTs, the receiver forms four time of flight (TOF) values, which are (given the speed of light) approximately equivalent to the receiver-satellite ranges plus the offset between the receiver clock and GPS time multiplied by the speed of light; these are called pseudo-ranges. The receiver then computes its three-dimensional position and clock deviation from the four TOFs.
In practice the receiver position (in three dimensional Cartesian coordinates with origin at the Earth's center) and the offset of the receiver clock relative to the GPS time are computed simultaneously, using the navigation equations to process the TOFs.
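A minimal sketch of how these navigation equations can be solved numerically is shown below. It assumes satellite positions and pseudoranges are already available, ignores satellite clock errors and atmospheric delays, and uses a generic Gauss–Newton iteration; the function name solve_position and its interface are illustrative, not those of any particular receiver or library.

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def solve_position(sat_positions, pseudoranges, iterations=10):
    """Estimate receiver ECEF position (m) and clock bias (s) from >= 4 pseudoranges.

    sat_positions: (N, 3) array of satellite ECEF coordinates at transmit time.
    pseudoranges:  (N,)  array of measured pseudoranges in metres.
    """
    x = np.zeros(4)  # state: [X, Y, Z, clock_bias * c], starting at Earth's centre
    for _ in range(iterations):
        diffs = x[:3] - sat_positions              # satellite-to-receiver vectors
        ranges = np.linalg.norm(diffs, axis=1)     # geometric ranges
        predicted = ranges + x[3]                  # modelled pseudoranges
        residuals = pseudoranges - predicted
        # Jacobian: unit line-of-sight vectors plus a column of ones for the clock term
        H = np.hstack([diffs / ranges[:, None], np.ones((len(ranges), 1))])
        dx, *_ = np.linalg.lstsq(H, residuals, rcond=None)
        x += dx
    return x[:3], x[3] / C   # ECEF position (m) and receiver clock bias (s)
```

With exactly four satellites the system is determined; with more, the least-squares step averages down measurement noise.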
The receiver's Earth-centered solution location is usually converted to latitude, longitude and height relative to an ellipsoidal Earth model. The height may then be further converted to height relative to the geoid, which is essentially mean sea level. These coordinates may be displayed, such as on a moving map display, or recorded or used by some other system, such as a vehicle guidance system.
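For completeness, a simplified iterative conversion from the Earth-centered solution to WGS-84 latitude, longitude and ellipsoidal height might look like the sketch below (geoid height is not handled, near-pole cases are ignored, and ecef_to_geodetic is an illustrative name).

```python
import math

# WGS-84 ellipsoid constants
A = 6378137.0                 # semi-major axis, m
F = 1 / 298.257223563         # flattening
E2 = F * (2 - F)              # first eccentricity squared

def ecef_to_geodetic(x, y, z, iterations=5):
    """Convert ECEF coordinates (m) to latitude/longitude (deg) and ellipsoidal height (m)."""
    lon = math.atan2(y, x)
    p = math.hypot(x, y)
    lat = math.atan2(z, p * (1 - E2))     # initial guess ignoring height
    for _ in range(iterations):
        n = A / math.sqrt(1 - E2 * math.sin(lat) ** 2)   # prime vertical radius of curvature
        h = p / math.cos(lat) - n
        lat = math.atan2(z, p * (1 - E2 * n / (n + h)))
    return math.degrees(lat), math.degrees(lon), h
```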
User-satellite geometry
Although usually not formed explicitly in the receiver processing, the conceptual time differences of arrival (TDOAs) define the measurement geometry. Each TDOA corresponds to a hyperboloid of revolution (see Multilateration). The line connecting the two satellites involved (and its extensions) forms the axis of the hyperboloid. The receiver is located at the point where three hyperboloids intersect.
It is sometimes incorrectly said that the user location is at the intersection of three spheres. While simpler to visualize, this is the case only if the receiver has a clock synchronized with the satellite clocks (i.e., the receiver measures true ranges to the satellites rather than range differences). There are marked performance benefits to the user carrying a clock synchronized with the satellites. Foremost is that only three satellites are needed to compute a position solution. If it were an essential part of the GPS concept that all users needed to carry a synchronized clock, a smaller number of satellites could be deployed, but the cost and complexity of the user equipment would increase.
Receiver in continuous operation
The description above is representative of a receiver start-up situation. Most receivers have a track algorithm, sometimes called a tracker, that combines sets of satellite measurements collected at different times—in effect, taking advantage of the fact that successive receiver positions are usually close to each other. After a set of measurements are processed, the tracker predicts the receiver location corresponding to the next set of satellite measurements. When the new measurements are collected, the receiver uses a weighting scheme to combine the new measurements with the tracker prediction. In general, a tracker can (a) improve receiver position and time accuracy, (b) reject bad measurements, and (c) estimate receiver speed and direction.
The disadvantage of a tracker is that changes in speed or direction can be computed only with a delay, and that derived direction becomes inaccurate when the distance traveled between two position measurements drops below or near the random error of position measurement. GPS units can use measurements of the Doppler shift of the signals received to compute velocity accurately. More advanced navigation systems use additional sensors like a compass or an inertial navigation system to complement GPS.
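A toy illustration of the weighting idea is given below: a constant-gain alpha-beta filter over a single coordinate. Real trackers are usually Kalman filters with full covariance bookkeeping, so this is only a sketch of the prediction/measurement blend described above.

```python
def alpha_beta_track(measurements, dt, alpha=0.85, beta=0.005):
    """Blend predicted and measured positions along one axis; return (position, speed) pairs."""
    x, v = measurements[0], 0.0            # initial state: first fix, zero velocity
    history = []
    for z in measurements[1:]:
        x_pred = x + v * dt                # predict forward from the previous state
        r = z - x_pred                     # innovation: new measurement minus prediction
        x = x_pred + alpha * r             # weighted blend of prediction and measurement
        v = v + (beta / dt) * r            # nudge the velocity estimate
        history.append((x, v))
    return history
```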
Non-navigation applications
GPS requires four or more satellites to be visible for accurate navigation. The solution of the navigation equations gives the position of the receiver along with the difference between the time kept by the receiver's on-board clock and the true time-of-day, thereby eliminating the need for a more precise and possibly impractical receiver based clock. Applications for GPS such as time transfer, traffic signal timing, and synchronization of cell phone base stations, make use of this cheap and highly accurate timing. Some GPS applications use this time for display, or, other than for the basic position calculations, do not use it at all.
Although four satellites are required for normal operation, fewer apply in special cases. If one variable is already known, a receiver can determine its position using only three satellites. For example, a ship on the open ocean usually has a known elevation close to 0m, and the elevation of an aircraft may be known. Some GPS receivers may use additional clues or assumptions such as reusing the last known altitude, dead reckoning, inertial navigation, or including information from the vehicle computer, to give a (possibly degraded) position when fewer than four satellites are visible.
Structure
The current GPS consists of three major segments. These are the space segment, a control segment, and a user segment. The U.S. Space Force develops, maintains, and operates the space and control segments. GPS satellites broadcast signals from space, and each GPS receiver uses these signals to calculate its three-dimensional location (latitude, longitude, and altitude) and the current time.
Space segment
The space segment (SS) is composed of 24 to 32 satellites, or Space Vehicles (SV), in medium Earth orbit, and also includes the payload adapters to the boosters required to launch them into orbit. The GPS design originally called for 24 SVs, eight each in three approximately circular orbits, but this was modified to six orbital planes with four satellites each. The six orbit planes have approximately 55° inclination (tilt relative to the Earth's equator) and are separated by 60° right ascension of the ascending node (angle along the equator from a reference point to the orbit's intersection). The orbital period is one-half a sidereal day, i.e., 11 hours and 58 minutes so that the satellites pass over the same locations or almost the same locations every day. The orbits are arranged so that at least six satellites are always within line of sight from everywhere on the Earth's surface (see animation at right). The result of this objective is that the four satellites are not evenly spaced (90°) apart within each orbit. In general terms, the angular difference between satellites in each orbit is 30°, 105°, 120°, and 105° apart, which sum to 360°.
Orbiting at an altitude of approximately 20,200 km, with an orbital radius of approximately 26,600 km, each SV makes two complete orbits each sidereal day, repeating the same ground track each day. This was very helpful during development because even with only four satellites, correct alignment means all four are visible from one spot for a few hours each day. For military operations, the ground track repeat can be used to ensure good coverage in combat zones.
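The quoted orbit geometry follows from the half-sidereal-day period via Kepler's third law; the following sketch reproduces the arithmetic with rounded constants.

```python
import math

MU_EARTH = 3.986004418e14        # Earth's gravitational parameter, m^3/s^2
SIDEREAL_DAY_S = 86_164.0905     # length of one sidereal day, s
EARTH_RADIUS_M = 6_371_000       # mean Earth radius, used only for the rough altitude

T = SIDEREAL_DAY_S / 2           # GPS orbital period: half a sidereal day
semi_major_axis_m = (MU_EARTH * (T / (2 * math.pi)) ** 2) ** (1 / 3)
print(f"orbital radius ~ {semi_major_axis_m / 1000:,.0f} km")                     # ~26,560 km
print(f"altitude       ~ {(semi_major_axis_m - EARTH_RADIUS_M) / 1000:,.0f} km")  # ~20,200 km
```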
There are 31 satellites in the GPS constellation, 27 of which are in use at a given time with the rest allocated as stand-bys. A 32nd was launched in 2018, but as of July 2019 is still in evaluation. More decommissioned satellites are in orbit and available as spares. The additional satellites improve the precision of GPS receiver calculations by providing redundant measurements. With the increased number of satellites, the constellation was changed to a nonuniform arrangement. Such an arrangement was shown to improve accuracy but also improves reliability and availability of the system, relative to a uniform system, when multiple satellites fail. With the expanded constellation, nine satellites are usually visible at any time from any point on the Earth with a clear horizon, ensuring considerable redundancy over the minimum four satellites needed for a position.
Control segment
The control segment (CS) is composed of:
a master control station (MCS),
an alternative master control station,
four dedicated ground antennas, and
six dedicated monitor stations.
The MCS can also access Satellite Control Network (SCN) ground antennas (for additional command and control capability) and NGA (National Geospatial-Intelligence Agency) monitor stations. The flight paths of the satellites are tracked by dedicated U.S. Space Force monitoring stations in Hawaii, Kwajalein Atoll, Ascension Island, Diego Garcia, Colorado Springs, Colorado and Cape Canaveral, along with shared NGA monitor stations operated in England, Argentina, Ecuador, Bahrain, Australia and Washington DC. The tracking information is sent to the MCS at Schriever Space Force Base ESE of Colorado Springs, which is operated by the 2nd Space Operations Squadron (2 SOPS) of the U.S. Space Force. Then 2 SOPS contacts each GPS satellite regularly with a navigational update using dedicated or shared (AFSCN) ground antennas (GPS dedicated ground antennas are located at Kwajalein, Ascension Island, Diego Garcia, and Cape Canaveral). These updates synchronize the atomic clocks on board the satellites to within a few nanoseconds of each other, and adjust the ephemeris of each satellite's internal orbital model. The updates are created by a Kalman filter that uses inputs from the ground monitoring stations, space weather information, and various other inputs.
Satellite maneuvers are not precise by GPS standards—so to change a satellite's orbit, the satellite must be marked unhealthy, so receivers don't use it. After the satellite maneuver, engineers track the new orbit from the ground, upload the new ephemeris, and mark the satellite healthy again.
The operation control segment (OCS) currently serves as the control segment of record. It provides the operational capability that supports GPS users and keeps the GPS operational and performing within specification.
OCS successfully replaced the legacy 1970s-era mainframe computer at Schriever Air Force Base in September 2007. After installation, the system helped enable upgrades and provide a foundation for a new security architecture that supported U.S. armed forces.
OCS will continue to be the ground control system of record until the new segment, Next Generation GPS Operation Control System (OCX), is fully developed and functional. The new capabilities provided by OCX will be the cornerstone for revolutionizing GPS's mission capabilities, enabling U.S. Space Force to greatly enhance GPS operational services to U.S. combat forces, civil partners and myriad domestic and international users. The GPS OCX program also will reduce cost, schedule and technical risk. It is designed to provide 50% sustainment cost savings through efficient software architecture and Performance-Based Logistics. In addition, GPS OCX is expected to cost millions less than the cost to upgrade OCS while providing four times the capability.
The GPS OCX program represents a critical part of GPS modernization and provides significant information assurance improvements over the current GPS OCS program.
OCX will have the ability to control and manage GPS legacy satellites as well as the next generation of GPS III satellites, while enabling the full array of military signals.
Built on a flexible architecture that can rapidly adapt to the changing needs of today's and future GPS users allowing immediate access to GPS data and constellation status through secure, accurate and reliable information.
Provides the warfighter with more secure, actionable and predictive information to enhance situational awareness.
Enables new modernized signals (L1C, L2C, and L5) and has M-code capability, which the legacy system is unable to do.
Provides significant information assurance improvements over the current program including detecting and preventing cyber attacks, while isolating, containing and operating during such attacks.
Supports higher volume near real-time command and control capabilities and abilities.
On September 14, 2011, the U.S. Air Force announced the completion of GPS OCX Preliminary Design Review and confirmed that the OCX program is ready for the next phase of development.
The GPS OCX program has missed major milestones and is pushing its launch into 2021, 5 years past the original deadline. According to the Government Accountability Office, even this new deadline looks shaky.
User segment
The user segment (US) is composed of hundreds of thousands of U.S. and allied military users of the secure GPS Precise Positioning Service, and tens of millions of civil, commercial and scientific users of the Standard Positioning Service. In general, GPS receivers are composed of an antenna, tuned to the frequencies transmitted by the satellites, receiver-processors, and a highly stable clock (often a crystal oscillator). They may also include a display for providing location and speed information to the user. A receiver is often described by its number of channels: this signifies how many satellites it can monitor simultaneously. Originally limited to four or five, this has progressively increased over the years so that receivers typically have between 12 and 20 channels. Though there are many receiver manufacturers, they almost all use one of the chipsets produced for this purpose.
GPS receivers may include an input for differential corrections, using the RTCM SC-104 format. This is typically in the form of an RS-232 port at 4,800 bit/s speed. Data is actually sent at a much lower rate, which limits the accuracy of the signal sent using RTCM. Receivers with internal DGPS receivers can outperform those using external RTCM data. Even low-cost units commonly include Wide Area Augmentation System (WAAS) receivers.
Many GPS receivers can relay position data to a PC or other device using the NMEA 0183 protocol. Although this protocol is officially defined by the National Marine Electronics Association (NMEA), references to this protocol have been compiled from public records, allowing open source tools like gpsd to read the protocol without violating intellectual property laws. Other proprietary protocols exist as well, such as the SiRF and MTK protocols. Receivers can interface with other devices using methods including a serial connection, USB, or Bluetooth.
Applications
While originally a military project, GPS is considered a dual-use technology, meaning it has significant civilian applications as well.
GPS has become a widely deployed and useful tool for commerce, scientific uses, tracking, and surveillance. GPS's accurate time facilitates everyday activities such as banking, mobile phone operations, and even the control of power grids by allowing well synchronized hand-off switching.
Civilian
Many civilian applications use one or more of GPS's three basic components: absolute location, relative movement, and time transfer.
Amateur radio: clock synchronization required for several digital modes such as FT8, FT4 and JS8; also used with APRS for position reporting; is often critical during emergency and disaster communications support.
Atmosphere: studying the troposphere delays (recovery of the water vapor content) and ionosphere delays (recovery of the number of free electrons). Recovery of Earth surface displacements due to the atmospheric pressure loading.
Astronomy: both positional and clock synchronization data is used in astrometry and celestial mechanics and precise orbit determination. GPS is also used in both amateur astronomy with small telescopes as well as by professional observatories for finding extrasolar planets.
Automated vehicle: applying location and routes for cars and trucks to function without a human driver.
Cartography: both civilian and military cartographers use GPS extensively.
Cellular telephony: clock synchronization enables time transfer, which is critical for synchronizing its spreading codes with other base stations to facilitate inter-cell handoff and support hybrid GPS/cellular position detection for mobile emergency calls and other applications. The first handsets with integrated GPS launched in the late 1990s. The U.S. Federal Communications Commission (FCC) mandated the feature in either the handset or in the towers (for use in triangulation) in 2002 so emergency services could locate 911 callers. Third-party software developers later gained access to GPS APIs from Nextel upon launch, followed by Sprint in 2006, and Verizon soon thereafter.
Clock synchronization: the accuracy of GPS time signals (±10 ns) is second only to the atomic clocks they are based on, and is used in applications such as GPS disciplined oscillators.
Disaster relief/emergency services: many emergency services depend upon GPS for location and timing capabilities.
GPS-equipped radiosondes and dropsondes: measure and calculate the atmospheric pressure, wind speed and direction up to from the Earth's surface.
Radio occultation for weather and atmospheric science applications.
Fleet tracking: used to identify, locate and maintain contact reports with one or more fleet vehicles in real-time.
Geodesy: determination of Earth orientation parameters including the daily and sub-daily polar motion, and length-of-day variabilities, Earth's center-of-mass - geocenter motion, and low-degree gravity field parameters.
Geofencing: vehicle tracking systems, person tracking systems, and pet tracking systems use GPS to locate devices that are attached to or carried by a person, vehicle, or pet. The application can provide continuous tracking and send notifications if the target leaves a designated (or "fenced-in") area.
Geotagging: applies location coordinates to digital objects such as photographs (in Exif data) and other documents for purposes such as creating map overlays with devices like Nikon GP-1
GPS aircraft tracking
GPS for mining: the use of RTK GPS has significantly improved several mining operations such as drilling, shoveling, vehicle tracking, and surveying. RTK GPS provides centimeter-level positioning accuracy.
GPS data mining: It is possible to aggregate GPS data from multiple users to understand movement patterns, common trajectories and interesting locations.
GPS tours: location determines what content to display; for instance, information about an approaching point of interest.
Navigation: navigators value digitally precise velocity and orientation measurements, as well as precise positions in real-time with a support of orbit and clock corrections.
Orbit determination of low-orbiting satellites with GPS receiver installed on board, such as GOCE, GRACE, Jason-1, Jason-2, TerraSAR-X, TanDEM-X, CHAMP, Sentinel-3, and some cubesats, e.g., CubETH.
Phasor measurements: GPS enables highly accurate timestamping of power system measurements, making it possible to compute phasors.
Recreation: for example, Geocaching, Geodashing, GPS drawing, waymarking, and other kinds of location based mobile games such as Pokémon Go.
Reference frames: realization and densification of the terrestrial reference frames in the framework of Global Geodetic Observing System. Co-location in space between Satellite laser ranging and microwave observations for deriving global geodetic parameters.
Robotics: self-navigating, autonomous robots using GPS sensors, which calculate latitude, longitude, time, speed, and heading.
Sport: used in football and rugby for the control and analysis of the training load.
Surveying: surveyors use absolute locations to make maps and determine property boundaries.
Tectonics: GPS enables direct fault motion measurement of earthquakes. Between earthquakes GPS can be used to measure crustal motion and deformation to estimate seismic strain buildup for creating seismic hazard maps.
Telematics: GPS technology integrated with computers and mobile communications technology in automotive navigation systems.
Restrictions on civilian use
The U.S. government controls the export of some civilian receivers. All GPS receivers capable of functioning above 18 km (60,000 ft) above sea level and faster than 515 m/s (1,000 knots), or designed or modified for use with unmanned missiles and aircraft, are classified as munitions (weapons), which means they require State Department export licenses.
This rule applies even to otherwise purely civilian units that only receive the L1 frequency and the C/A (Coarse/Acquisition) code.
Disabling operation above these limits exempts the receiver from classification as a munition. Vendor interpretations differ. The rule refers to operation at both the target altitude and speed, but some receivers stop operating even when stationary. This has caused problems with some amateur radio balloon launches that regularly reach .
These limits only apply to units or components exported from the United States. A growing trade in various components exists, including GPS units from other countries. These are expressly sold as ITAR-free.
Military
As of 2009, military GPS applications include:
Navigation: Soldiers use GPS to find objectives, even in the dark or in unfamiliar territory, and to coordinate troop and supply movement. In the United States armed forces, commanders use the Commander's Digital Assistant and lower ranks use the Soldier Digital Assistant.
Target tracking: Various military weapons systems use GPS to track potential ground and air targets before flagging them as hostile. These weapon systems pass target coordinates to precision-guided munitions to allow them to engage targets accurately. Military aircraft, particularly in air-to-ground roles, use GPS to find targets.
Missile and projectile guidance: GPS allows accurate targeting of various military weapons including ICBMs, cruise missiles, precision-guided munitions and artillery shells. Embedded GPS receivers able to withstand accelerations of 12,000 g have been developed for use in howitzer shells.
Search and rescue.
Reconnaissance: Patrol movement can be managed more closely.
GPS satellites carry a set of nuclear detonation detectors consisting of an optical sensor called a bhangmeter, an X-ray sensor, a dosimeter, and an electromagnetic pulse (EMP) sensor (W-sensor), that form a major portion of the United States Nuclear Detonation Detection System. General William Shelton has stated that future satellites may drop this feature to save money.
GPS type navigation was first used in war in the 1991 Persian Gulf War, before GPS was fully developed in 1995, to assist Coalition Forces to navigate and perform maneuvers in the war. The war also demonstrated the vulnerability of GPS to being jammed, when Iraqi forces installed jamming devices on likely targets that emitted radio noise, disrupting reception of the weak GPS signal.
GPS's vulnerability to jamming is a threat that continues to grow as jamming equipment and experience grow. GPS signals have been reported to have been jammed many times over the years for military purposes. Russia seems to have several objectives for this behavior, such as intimidating neighbors while undermining confidence in their reliance on American systems, promoting their GLONASS alternative, disrupting Western military exercises, and protecting assets from drones. China uses jamming to discourage US surveillance aircraft near the contested Spratly Islands. North Korea has mounted several major jamming operations near its border with South Korea and offshore, disrupting flights, shipping and fishing operations. The Iranian Armed Forces disrupted the GPS of civilian airliner Flight PS752 when they shot down the aircraft.
Timekeeping
Leap seconds
While most clocks derive their time from Coordinated Universal Time (UTC), the atomic clocks on the satellites are set to "GPS time". The difference is that GPS time is not corrected to match the rotation of the Earth, so it does not contain leap seconds or other corrections that are periodically added to UTC. GPS time was set to match UTC in 1980, but has since diverged. The lack of corrections means that GPS time remains at a constant offset with International Atomic Time (TAI) (TAI - GPS = 19 seconds). Periodic corrections are performed to the on-board clocks to keep them synchronized with ground clocks.
The GPS navigation message includes the difference between GPS time and UTC. GPS time is 18 seconds ahead of UTC because of the leap second added to UTC on December 31, 2016. Receivers subtract this offset from GPS time to calculate UTC and specific time zone values. New GPS units may not show the correct UTC time until after receiving the UTC offset message. The GPS-UTC offset field can accommodate 255 leap seconds (eight bits).
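A minimal sketch of that offset handling is shown below; gps_to_utc is an illustrative helper, and the leap_seconds argument stands in for the GPS-UTC offset broadcast in the navigation message (18 s since the end of 2016).

```python
from datetime import datetime, timedelta, timezone

GPS_EPOCH = datetime(1980, 1, 6, tzinfo=timezone.utc)   # GPS week 0, second 0

def gps_to_utc(week, seconds_of_week, leap_seconds=18):
    """Convert a GPS week/seconds-of-week pair to UTC using the broadcast offset."""
    # GPS time is a continuous scale with no leap seconds, which matches
    # Python's datetime arithmetic; subtracting the offset yields UTC.
    gps_time = GPS_EPOCH + timedelta(weeks=week, seconds=seconds_of_week)
    return gps_time - timedelta(seconds=leap_seconds)
```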
Accuracy
GPS time is theoretically accurate to about 14 nanoseconds, due to the clock drift relative to International Atomic Time that the atomic clocks in GPS transmitters experience. Most receivers lose some accuracy in their interpretation of the signals and are only accurate to about 100 nanoseconds.
Format
As opposed to the year, month, and day format of the Gregorian calendar, the GPS date is expressed as a week number and a seconds-into-week number. The week number is transmitted as a ten-bit field in the C/A and P(Y) navigation messages, and so it becomes zero again every 1,024 weeks (19.6 years). GPS week zero started at 00:00:00 UTC (00:00:19 TAI) on January 6, 1980, and the week number became zero again for the first time at 23:59:47 UTC on August 21, 1999 (00:00:19 TAI on August 22, 1999). It happened the second time at 23:59:42 UTC on April 6, 2019. To determine the current Gregorian date, a GPS receiver must be provided with the approximate date (to within 3,584 days) to correctly translate the GPS date signal. To address this concern in the future the modernized GPS civil navigation (CNAV) message will use a 13-bit field that only repeats every 8,192 weeks (157 years), thus lasting until 2137 (157 years after GPS week zero).
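A sketch of the rollover disambiguation for the legacy 10-bit week field follows; resolve_week is an illustrative helper, and the approximate date (a timezone-aware UTC datetime) only needs to be correct to within about 512 weeks.

```python
from datetime import datetime, timezone

GPS_EPOCH = datetime(1980, 1, 6, tzinfo=timezone.utc)
ROLLOVER = 1024   # the legacy navigation message carries the week number modulo 1024

def resolve_week(truncated_week, approximate_now):
    """Recover the full GPS week number from its 10-bit value and a rough current date."""
    weeks_now = (approximate_now - GPS_EPOCH).days // 7
    # Choose the full week that matches the truncated value and lies closest to today.
    candidate = weeks_now - (weeks_now - truncated_week) % ROLLOVER
    if weeks_now - candidate > ROLLOVER // 2:
        candidate += ROLLOVER
    return candidate
```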
Communication
The navigational signals transmitted by GPS satellites encode a variety of information including satellite positions, the state of the internal clocks, and the health of the network. These signals are transmitted on two separate carrier frequencies that are common to all satellites in the network. Two different encodings are used: a public encoding that enables lower resolution navigation, and an encrypted encoding used by the U.S. military.
Message format
{|class="wikitable" style="float:right; margin:0 0 0.5em 1em;" border="1"
|+
! Subframes !! Description
|-
| 1 || Satellite clock, GPS time relationship
|-
| 2–3 || Ephemeris (precise satellite orbit)
|-
| 4–5 || Almanac component (satellite network synopsis, error correction)
|}
Each GPS satellite continuously broadcasts a navigation message on L1 (C/A and P/Y) and L2 (P/Y) frequencies at a rate of 50 bits per second (see bitrate). Each complete message takes 750 seconds (12.5 minutes) to transmit. The message structure has a basic format of a 1500-bit-long frame made up of five subframes, each subframe being 300 bits (6 seconds) long. Subframes 4 and 5 are subcommutated 25 times each, so that a complete data message requires the transmission of 25 full frames. Each subframe consists of ten words, each 30 bits long. Thus, with 300 bits in a subframe times 5 subframes in a frame times 25 frames in a message, each message is 37,500 bits long. At a transmission rate of 50 bit/s, this gives 750 seconds to transmit an entire almanac message (GPS). Each 30-second frame begins precisely on the minute or half-minute as indicated by the atomic clock on each satellite.
The first subframe of each frame encodes the week number and the time within the week, as well as the data about the health of the satellite. The second and the third subframes contain the ephemeris – the precise orbit for the satellite. The fourth and fifth subframes contain the almanac, which contains coarse orbit and status information for up to 32 satellites in the constellation as well as data related to error correction. Thus, to obtain an accurate satellite location from this transmitted message, the receiver must demodulate the message from each satellite it includes in its solution for 18 to 30 seconds. To collect all transmitted almanacs, the receiver must demodulate the message for 732 to 750 seconds, or about 12.5 minutes.
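The frame and message durations quoted above follow directly from this layout; a quick arithmetic check:

```python
BITS_PER_WORD = 30
WORDS_PER_SUBFRAME = 10
SUBFRAMES_PER_FRAME = 5
PAGES = 25                 # subframes 4 and 5 are subcommutated over 25 pages
BIT_RATE_BPS = 50

frame_bits = BITS_PER_WORD * WORDS_PER_SUBFRAME * SUBFRAMES_PER_FRAME   # 1,500 bits
message_bits = frame_bits * PAGES                                       # 37,500 bits
print(frame_bits / BIT_RATE_BPS)      # 30.0   seconds per frame
print(message_bits / BIT_RATE_BPS)    # 750.0  seconds (12.5 min) for the full message
```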
All satellites broadcast at the same frequencies, encoding signals using unique code-division multiple access (CDMA) so receivers can distinguish individual satellites from each other. The system uses two distinct CDMA encoding types: the coarse/acquisition (C/A) code, which is accessible by the general public, and the precise (P(Y)) code, which is encrypted so that only the U.S. military and other NATO nations who have been given access to the encryption code can access it.
The ephemeris is updated every 2 hours and is sufficiently stable for 4 hours, with provisions for updates every 6 hours or longer in non-nominal conditions. The almanac is updated typically every 24 hours. Additionally, data for a few weeks following is uploaded in case of transmission updates that delay data upload.
Satellite frequencies
{|class="wikitable" style="float:right; width:30em; margin:0 0 0.5em 1em;" border="1"
|+
! Band !! Frequency !! Description
|-
| L1 || 1575.42 MHz || Coarse-acquisition (C/A) and encrypted precision (P(Y)) codes, plus the L1 civilian (L1C) and military (M) codes on Block III and newer satellites.
|-
| L2 || 1227.60 MHz || P(Y) code, plus the L2C and military codes on the Block IIR-M and newer satellites.
|-
| L3 || 1381.05 MHz || Used for nuclear detonation (NUDET) detection.
|-
| L4 || 1379.913 MHz || Being studied for additional ionospheric correction.
|-
| L5 || 1176.45 MHz || Used as a civilian safety-of-life (SoL) signal on Block IIF and newer satellites.
|}
All satellites broadcast at the same two frequencies, 1.57542 GHz (L1 signal) and 1.2276 GHz (L2 signal). The satellite network uses a CDMA spread-spectrum technique where the low-bitrate message data is encoded with a high-rate pseudo-random (PRN) sequence that is different for each satellite. The receiver must be aware of the PRN codes for each satellite to reconstruct the actual message data. The C/A code, for civilian use, transmits data at 1.023 million chips per second, whereas the P code, for U.S. military use, transmits at 10.23 million chips per second. The actual internal reference of the satellites is 10.22999999543 MHz to compensate for relativistic effects that make observers on the Earth perceive a different time reference with respect to the transmitters in orbit. The L1 carrier is modulated by both the C/A and P codes, while the L2 carrier is only modulated by the P code. The P code can be encrypted as a so-called P(Y) code that is only available to military equipment with a proper decryption key. Both the C/A and P(Y) codes impart the precise time-of-day to the user.
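These carrier frequencies and chipping rates are simple multiples or fractions of the common 10.23 MHz fundamental, which the sketch below makes explicit (L5 is included for comparison with the table above).

```python
F0_MHZ = 10.23   # fundamental frequency shared by the satellite clocks and codes

carrier_multipliers = {"L1": 154, "L2": 120, "L5": 115}
for band, k in carrier_multipliers.items():
    print(f"{band}: {k} x {F0_MHZ} MHz = {k * F0_MHZ:.2f} MHz")
# L1: 1575.42 MHz, L2: 1227.60 MHz, L5: 1176.45 MHz

print("C/A code rate:", F0_MHZ / 10, "Mchip/s")   # 1.023
print("P code rate:  ", F0_MHZ, "Mchip/s")        # 10.23
```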
The L3 signal at a frequency of 1.38105 GHz is used to transmit data from the satellites to ground stations. This data is used by the United States Nuclear Detonation (NUDET) Detection System (USNDS) to detect, locate, and report nuclear detonations (NUDETs) in the Earth's atmosphere and near space. One usage is the enforcement of nuclear test ban treaties.
The L4 band at 1.379913 GHz is being studied for additional ionospheric correction.
The L5 frequency band at 1.17645 GHz was added in the process of GPS modernization. This frequency falls into an internationally protected range for aeronautical navigation, promising little or no interference under all circumstances. The first Block IIF satellite that provides this signal was launched in May 2010. On February 5, 2016, the 12th and final Block IIF satellite was launched. The L5 signal consists of two carrier components that are in phase quadrature with each other. Each carrier component is binary phase-shift keying (BPSK) modulated by a separate bit train. "L5, the third civil GPS signal, will eventually support safety-of-life applications for aviation and provide improved availability and accuracy."
In 2011, a conditional waiver was granted to LightSquared to operate a terrestrial broadband service near the L1 band. Although LightSquared had applied for a license to operate in the 1525 to 1559 MHz band as early as 2003 and it was put out for public comment, the FCC asked LightSquared to form a study group with the GPS community to test GPS receivers and identify issues that might arise due to the larger signal power from the LightSquared terrestrial network. The GPS community had not objected to the LightSquared (formerly MSV and SkyTerra) applications until November 2010, when LightSquared applied for a modification to its Ancillary Terrestrial Component (ATC) authorization. This filing (SAT-MOD-20101118-00239) amounted to a request to run several orders of magnitude more power in the same frequency band for terrestrial base stations, essentially repurposing what was supposed to be a "quiet neighborhood" for signals from space as the equivalent of a cellular network. Testing in the first half of 2011 demonstrated that the impact of the lower 10 MHz of spectrum on GPS devices is minimal (less than 1% of the total GPS devices are affected). The upper 10 MHz intended for use by LightSquared may have some impact on GPS devices. There is some concern that this may seriously degrade the GPS signal for many consumer uses. Aviation Week magazine reported that testing in June 2011 confirmed "significant jamming" of GPS by LightSquared's system.
Demodulation and decoding
Because all of the satellite signals are modulated onto the same L1 carrier frequency, the signals must be separated after demodulation. This is done by assigning each satellite a unique binary sequence known as a Gold code. The signals are decoded after demodulation using modulo-2 addition of the Gold codes corresponding to the satellites monitored by the receiver.
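The C/A Gold codes mentioned above are produced by two 10-stage linear-feedback shift registers. The following Python sketch illustrates the usual construction under stated assumptions: the G1 register feeds back from stages 3 and 10, the G2 register from stages 2, 3, 6, 8, 9 and 10, both start from all ones, and each satellite's code is the G1 output XORed with a satellite-specific pair of G2 stage outputs (the pair (2, 6) is commonly cited for PRN 1; the full per-satellite table is defined in the GPS interface specification and is not reproduced here). It is an illustrative sketch, not production acquisition code.

<syntaxhighlight lang="python">
def ca_code(tap_pair=(2, 6), length=1023):
    """Generate one period of a GPS C/A Gold code (illustrative sketch).

    tap_pair -- satellite-specific pair of G2 output stages, 1-indexed
                (e.g. (2, 6) is commonly cited for PRN 1).
    Returns a list of 1023 chips with values 0/1.
    """
    g1 = [1] * 10          # both registers are initialised to all ones
    g2 = [1] * 10
    chips = []
    for _ in range(length):
        # G2i output: XOR of the two satellite-specific stage outputs
        g2i = g2[tap_pair[0] - 1] ^ g2[tap_pair[1] - 1]
        chips.append(g1[9] ^ g2i)              # chip = G1 output XOR G2i
        # feedback taps: G1 stages 3,10 ; G2 stages 2,3,6,8,9,10 (1-indexed)
        fb1 = g1[2] ^ g1[9]
        fb2 = g2[1] ^ g2[2] ^ g2[5] ^ g2[7] ^ g2[8] ^ g2[9]
        g1 = [fb1] + g1[:9]                    # shift, insert feedback at stage 1
        g2 = [fb2] + g2[:9]
    return chips

if __name__ == "__main__":
    code = ca_code()
    # A Gold code of length 1023 is almost evenly balanced between 0s and 1s.
    print(len(code), sum(code))
</syntaxhighlight>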
If the almanac information has previously been acquired, the receiver picks the satellites to listen for by their PRNs, unique numbers in the range 1 through 32. If the almanac information is not in memory, the receiver enters a search mode until a lock is obtained on one of the satellites. To obtain a lock, it is necessary that there be an unobstructed line of sight from the receiver to the satellite. The receiver can then acquire the almanac and determine the satellites it should listen for. As it detects each satellite's signal, it identifies it by its distinct C/A code pattern. There can be a delay of up to 30 seconds before the first estimate of position because of the need to read the ephemeris data.
Processing of the navigation message enables the determination of the time of transmission and the satellite position at this time. For more information see Demodulation and Decoding, Advanced.
Navigation equations
Problem description
The receiver uses messages received from satellites to determine the satellite positions and time sent. The x, y, and z components of satellite position and the time sent (s) are designated as [xi, yi, zi, si] where the subscript i denotes the satellite and has the value 1, 2, ..., n, where n ≥ 4. When the time of message reception indicated by the on-board receiver clock is t̃i, the true reception time is <math>t_i = \tilde{t}_i - b</math>, where b is the receiver's clock bias from the much more accurate GPS clocks employed by the satellites. The receiver clock bias is the same for all received satellite signals (assuming the satellite clocks are all perfectly synchronized). The message's transit time is <math>\tilde{t}_i - b - s_i</math>, where si is the satellite time. Assuming the message traveled at the speed of light, c, the distance traveled is <math>\left(\tilde{t}_i - b - s_i\right)c</math>.
For n satellites, the equations to satisfy are:
<math>d_i = \left(\tilde{t}_i - b - s_i\right)c, \quad i = 1, 2, \dots, n</math>
where di is the geometric distance or range between receiver and satellite i (the values without subscripts are the x, y, and z components of receiver position):
<math>d_i = \sqrt{(x - x_i)^2 + (y - y_i)^2 + (z - z_i)^2}.</math>
Defining pseudoranges as <math>p_i = \left(\tilde{t}_i - s_i\right)c</math>, we see they are biased versions of the true range:
<math>p_i = d_i + bc, \quad i = 1, 2, \dots, n</math>.
Since the equations have four unknowns [x, y, z, b]—the three components of GPS receiver position and the clock bias—signals from at least four satellites are necessary to attempt solving these equations. They can be solved by algebraic or numerical methods. Existence and uniqueness of GPS solutions are discussed by Abell and Chaffee. When n is greater than four, this system is overdetermined and a fitting method must be used.
The amount of error in the results varies with the received satellites' locations in the sky, since certain configurations (when the received satellites are close together in the sky) cause larger errors. Receivers usually calculate a running estimate of the error in the calculated position. This is done by multiplying the basic resolution of the receiver by quantities called the geometric dilution of precision (GDOP) factors, calculated from the relative sky directions of the satellites used. The receiver location is expressed in a specific coordinate system, such as latitude and longitude using the WGS 84 geodetic datum or a country-specific system.
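To make the dilution-of-precision idea concrete, the sketch below (Python with NumPy; the line-of-sight vectors are hypothetical) builds the usual geometry matrix with one row per satellite, containing the unit receiver-to-satellite vector plus a clock column, and computes GDOP as the square root of the trace of the inverse of GᵀG. This is a minimal illustration of the standard definition, not a full receiver error model.

<syntaxhighlight lang="python">
import numpy as np

def gdop(unit_vectors):
    """Geometric dilution of precision from receiver-to-satellite
    unit line-of-sight vectors (illustrative sketch)."""
    u = np.asarray(unit_vectors, dtype=float)
    # Geometry matrix: one row per satellite, [ux, uy, uz, 1];
    # the final column accounts for the receiver clock-bias unknown.
    g = np.hstack([u, np.ones((u.shape[0], 1))])
    q = np.linalg.inv(g.T @ g)           # covariance scaling matrix
    return np.sqrt(np.trace(q))          # GDOP = sqrt(trace(Q))

if __name__ == "__main__":
    # Hypothetical line-of-sight geometry: four satellites spread across the sky.
    los = [(0.5, 0.5, 0.707), (-0.5, 0.5, 0.707),
           (0.0, -0.707, 0.707), (0.0, 0.0, 1.0)]
    print("GDOP:", round(gdop(los), 2))
</syntaxhighlight>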
Geometric interpretation
The GPS equations can be solved by numerical and analytical methods. Geometrical interpretations can enhance the understanding of these solution methods.
Spheres
The measured ranges, called pseudoranges, contain clock errors. In a simplified idealization in which the ranges are synchronized, these true ranges represent the radii of spheres, each centered on one of the transmitting satellites. The solution for the position of the receiver is then at the intersection of the surfaces of these spheres; see trilateration (more generally, true-range multilateration). Signals from at least three satellites are required, and their three spheres would typically intersect at two points. One of the points is the location of the receiver, and the other moves rapidly in successive measurements and would not usually be on Earth's surface.
In practice, there are many sources of inaccuracy besides clock bias, including random errors as well as the potential for precision loss from subtracting numbers close to each other if the centers of the spheres are relatively close together. This means that the position calculated from three satellites alone is unlikely to be accurate enough. Data from more satellites can help because of the tendency for random errors to cancel out and also by giving a larger spread between the sphere centers. But at the same time, more spheres will not generally intersect at one point. Therefore, a near intersection gets computed, typically via least squares. The more signals available, the better the approximation is likely to be.
Hyperboloids
If the pseudorange between the receiver and satellite i and the pseudorange between the receiver and satellite j are subtracted, <math>p_i - p_j</math>, the common receiver clock bias (b) cancels out, resulting in a difference of distances <math>d_i - d_j</math>. The locus of points having a constant difference in distance to two points (here, two satellites) is a hyperbola on a plane and a hyperboloid of revolution (more specifically, a two-sheeted hyperboloid) in 3D space (see Multilateration). Thus, from four pseudorange measurements, the receiver can be placed at the intersection of the surfaces of three hyperboloids each with foci at a pair of satellites. With additional satellites, the multiple intersections are not necessarily unique, and a best-fitting solution is sought instead.
Inscribed sphere
The receiver position can be interpreted as the center of an inscribed sphere (insphere) of radius bc, given by the receiver clock bias b (scaled by the speed of light c). The insphere location is such that it touches other spheres. The circumscribing spheres are centered at the GPS satellites, whose radii equal the measured pseudoranges pi. This configuration is distinct from the one described above, in which the spheres' radii were the unbiased or geometric ranges di.
Hypercones
The clock in the receiver is usually not of the same quality as the ones in the satellites and will not be accurately synchronized to them. This produces pseudoranges with large differences compared to the true distances to the satellites. Therefore, in practice, the time difference between the receiver clock and the satellite time is defined as an unknown clock bias b. The equations are then solved simultaneously for the receiver position and the clock bias. The solution space [x, y, z, b] can be seen as a four-dimensional spacetime, and signals from at least four satellites are needed. In that case each of the equations describes a hypercone (or spherical cone), with the cusp located at the satellite, and the base a sphere around the satellite. The receiver is at the intersection of four or more of such hypercones.
Solution methods
Least squares
When more than four satellites are available, the calculation can use the four best, or more than four simultaneously (up to all visible satellites), depending on the number of receiver channels, processing capability, and geometric dilution of precision (GDOP).
Using more than four involves an over-determined system of equations with no unique solution; such a system can be solved by a least-squares or weighted least squares method.
Iterative
Both the equations for four satellites, or the least squares equations for more than four, are non-linear and need special solution methods. A common approach is by iteration on a linearized form of the equations, such as the Gauss–Newton algorithm.
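A minimal sketch of such an iterative solution is given below, assuming the pseudorange model <math>p_i = d_i + bc</math> introduced earlier. It linearizes the model around the current state estimate and applies Gauss–Newton updates (here via a linear least-squares step, so it also handles more than four satellites). The satellite positions, pseudoranges, and the function name solve_position are illustrative placeholders; a real receiver would add atmospheric, relativistic, and other corrections.

<syntaxhighlight lang="python">
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def solve_position(sat_pos, pseudoranges, x0=None, iters=10):
    """Gauss-Newton solution of the GPS pseudorange equations (sketch).

    sat_pos      -- (n, 3) array of satellite ECEF positions in metres, n >= 4
    pseudoranges -- (n,) array of measured pseudoranges in metres
    Returns (receiver ECEF position, receiver clock bias in seconds).
    """
    sat_pos = np.asarray(sat_pos, float)
    p = np.asarray(pseudoranges, float)
    # state vector: [x, y, z, b*c]  (clock bias kept in metres for conditioning)
    state = np.zeros(4) if x0 is None else np.asarray(x0, float)
    for _ in range(iters):
        diff = state[:3] - sat_pos                  # receiver minus satellites
        ranges = np.linalg.norm(diff, axis=1)       # geometric ranges d_i
        residuals = p - (ranges + state[3])         # p_i - (d_i + b*c)
        # Jacobian of (d_i + b*c) with respect to [x, y, z, b*c]
        jac = np.hstack([diff / ranges[:, None], np.ones((len(p), 1))])
        # Gauss-Newton / least-squares update (also covers n > 4)
        delta, *_ = np.linalg.lstsq(jac, residuals, rcond=None)
        state = state + delta
    return state[:3], state[3] / C

if __name__ == "__main__":
    # Hypothetical test: synthesise pseudoranges from a known position and bias.
    rng = np.random.default_rng(0)
    sats = rng.normal(size=(6, 3)) * 1.0e7 + np.array([0.0, 0.0, 2.0e7])
    truth = np.array([1.2e6, -4.5e6, 4.0e6])
    bias = 3.0e-4                                   # 0.3 ms receiver clock bias
    pr = np.linalg.norm(sats - truth, axis=1) + bias * C
    pos, b = solve_position(sats, pr)
    print(np.round(pos), round(b * 1e3, 6), "ms")
</syntaxhighlight>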
The GPS was initially developed assuming use of a numerical least-squares solution method—i.e., before closed-form solutions were found.
Closed-form
One closed-form solution to the above set of equations was developed by S. Bancroft. Its properties are well known; in particular, proponents claim it is superior in low-GDOP situations, compared to iterative least squares methods.
Bancroft's method is algebraic, as opposed to numerical, and can be used for four or more satellites. When four satellites are used, the key steps are inversion of a 4x4 matrix and solution of a single-variable quadratic equation. Bancroft's method provides one or two solutions for the unknown quantities. When there are two (usually the case), only one is a near-Earth sensible solution.
When a receiver uses more than four satellites for a solution, Bancroft uses the generalized inverse (i.e., the pseudoinverse) to find a solution. A case has been made that iterative methods, such as the Gauss–Newton algorithm approach for solving over-determined non-linear least squares (NLLS) problems, generally provide more accurate solutions.
Leick et al. (2015) states that "Bancroft's (1985) solution is a very early, if not the first, closed-form solution."
Other closed-form solutions were published afterwards, although their adoption in practice is unclear.
Error sources and analysis
GPS error analysis examines error sources in GPS results and the expected size of those errors. GPS makes corrections for receiver clock errors and other effects, but some residual errors remain uncorrected. Error sources include signal arrival time measurements, numerical calculations, atmospheric effects (ionospheric/tropospheric delays), ephemeris and clock data, multipath signals, and natural and artificial interference. Magnitude of residual errors from these sources depends on geometric dilution of precision. Artificial errors may result from jamming devices and threaten ships and aircraft or from intentional signal degradation through selective availability, which limited accuracy to ≈ , but has been switched off since May 1, 2000.
Accuracy enhancement and surveying
Augmentation
Integrating external information into the calculation process can materially improve accuracy. Such augmentation systems are generally named or described based on how the information arrives. Some systems transmit additional error information (such as clock drift, ephemeris data, or ionospheric delay), others characterize prior errors, while a third group provides additional navigational or vehicle information.
Examples of augmentation systems include the Wide Area Augmentation System (WAAS), European Geostationary Navigation Overlay Service (EGNOS), Differential GPS (DGPS), inertial navigation systems (INS) and Assisted GPS. The standard accuracy of about can be augmented to with DGPS, and to about with WAAS.
Precise monitoring
Accuracy can be improved through precise monitoring and measurement of existing GPS signals in additional or alternative ways.
The largest remaining error is usually the unpredictable delay through the ionosphere. The spacecraft broadcast ionospheric model parameters, but some errors remain. This is one reason GPS spacecraft transmit on at least two frequencies, L1 and L2. Ionospheric delay is a well-defined function of frequency and the total electron content (TEC) along the path, so measuring the arrival time difference between the frequencies determines TEC and thus the precise ionospheric delay at each frequency.
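A short sketch of that dual-frequency correction follows, assuming the standard first-order ionospheric coefficient of 40.3 (SI units) and using the L1 and L2 center frequencies given above; the pseudorange values in the example are hypothetical.

<syntaxhighlight lang="python">
F_L1 = 1575.42e6   # Hz
F_L2 = 1227.60e6   # Hz
K = 40.3           # first-order ionospheric coefficient (assumed standard value)

def iono_delay_l1(p1, p2):
    """First-order ionospheric delay on L1 (metres) from dual-frequency
    pseudoranges p1 (L1) and p2 (L2), both in metres. Sketch only."""
    # delay at frequency f is K * TEC / f^2, so
    # p2 - p1 = delay_L2 - delay_L1 = delay_L1 * (f1^2 - f2^2) / f2^2
    return (p2 - p1) * F_L2**2 / (F_L1**2 - F_L2**2)

def tec(p1, p2):
    """Slant total electron content (electrons/m^2) implied by the same pair."""
    return iono_delay_l1(p1, p2) * F_L1**2 / K

if __name__ == "__main__":
    # Hypothetical measurements: the L2 pseudorange reads ~4.1 m longer than L1.
    p1, p2 = 22_345_678.0, 22_345_682.1
    print(round(iono_delay_l1(p1, p2), 2), "m on L1,",
          f"{tec(p1, p2):.2e}", "el/m^2")
</syntaxhighlight>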
Military receivers can decode the P(Y) code transmitted on both L1 and L2. Without decryption keys, it is still possible to use a codeless technique to compare the P(Y) codes on L1 and L2 to gain much of the same error information. This technique is slow, so it is currently available only on specialized surveying equipment. In the future, additional civilian codes are expected to be transmitted on the L2 and L5 frequencies. All users will then be able to perform dual-frequency measurements and directly compute ionospheric delay errors.
A second form of precise monitoring is called Carrier-Phase Enhancement (CPGPS). This corrects the error that arises because the pulse transition of the PRN is not instantaneous, and thus the correlation (satellite–receiver sequence matching) operation is imperfect. CPGPS uses the L1 carrier wave, which has a period of , which is about one-thousandth of the C/A Gold code bit period of , to act as an additional clock signal and resolve the uncertainty. The phase difference error in the normal GPS amounts to of ambiguity. CPGPS working to within 1% of perfect transition reduces this error to of ambiguity. By eliminating this error source, CPGPS coupled with DGPS normally realizes between of absolute accuracy.
Relative Kinematic Positioning (RKP) is a third alternative for a precise GPS-based positioning system. In this approach, determination of range signal can be resolved to a precision of less than . This is done by resolving the number of cycles that the signal is transmitted and received by the receiver by using a combination of differential GPS (DGPS) correction data, transmitting GPS signal phase information and ambiguity resolution techniques via statistical tests—possibly with processing in real-time (real-time kinematic positioning, RTK).
Carrier phase tracking (surveying)
Another method that is used in surveying applications is carrier phase tracking. The period of the carrier frequency multiplied by the speed of light gives the wavelength, which is about for the L1 carrier. Accuracy within 1% of wavelength in detecting the leading edge reduces this component of pseudorange error to as little as . This compares to for the C/A code and for the P code.
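The figures in this section follow from simple arithmetic on the rates already given (the 1575.42 MHz L1 carrier, the 1.023 Mchip/s C/A code and the 10.23 Mchip/s P code); the only added constant is the speed of light. A small sketch of that arithmetic:

<syntaxhighlight lang="python">
C = 299_792_458.0        # speed of light, m/s

signals = {
    "L1 carrier (1575.42 MHz)": 1575.42e6,
    "C/A code (1.023 Mchip/s)": 1.023e6,
    "P code (10.23 Mchip/s)":   10.23e6,
}

for name, rate in signals.items():
    wavelength = C / rate                     # metres per cycle or per chip
    # Resolving the edge to ~1% of a cycle/chip scales the ranging error accordingly.
    print(f"{name}: {wavelength:10.3f} m per cycle/chip, "
          f"~{wavelength * 0.01:8.4f} m at 1%")
</syntaxhighlight>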
accuracy requires measuring the total phase—the number of waves multiplied by the wavelength plus the fractional wavelength, which requires specially equipped receivers. This method has many surveying applications. It is accurate enough for real-time tracking of the very slow motions of tectonic plates, typically per year.
Triple differencing followed by numerical root finding and a least-squares technique can estimate the position of one receiver given the position of another. First, compute the difference between satellites, then between receivers, and finally between epochs. Other orders of taking differences are equally valid. Detailed discussion of the errors is omitted.
The satellite carrier total phase can be measured with ambiguity as to the number of cycles. Let <math>\phi(r_i, s_j, t_k)</math> denote the phase of the carrier of satellite j measured by receiver i at time <math>t_k</math>. This notation shows the meaning of the subscripts i, j, and k. The receiver (r), satellite (s), and time (t) come in alphabetical order as arguments of <math>\phi</math> and to balance readability and conciseness, let <math>\phi_{i,j,k} = \phi(r_i, s_j, t_k)</math> be a concise abbreviation. Also we define three functions, <math>\Delta^r, \Delta^s, \Delta^t</math>, which return differences between receivers, satellites, and time points, respectively. Each function has variables with three subscripts as its arguments. These three functions are defined below. If <math>\phi_{i,j,k}</math> is a function of the three integer arguments, i, j, and k, then it is a valid argument for the functions <math>\Delta^r, \Delta^s, \Delta^t</math>, with the values defined as
<math>\Delta^r(\phi_{i,j,k}) = \phi_{i+1,j,k} - \phi_{i,j,k}</math>,
<math>\Delta^s(\phi_{i,j,k}) = \phi_{i,j+1,k} - \phi_{i,j,k}</math>, and
<math>\Delta^t(\phi_{i,j,k}) = \phi_{i,j,k+1} - \phi_{i,j,k}</math>.
Also if <math>\alpha_{i,j,k}</math> and <math>\beta_{l,m,n}</math> are valid arguments for the three functions and a and b are constants, then
<math>a\,\alpha_{i,j,k} + b\,\beta_{l,m,n}</math> is a valid argument with values defined as
<math>\Delta^r(a\,\alpha_{i,j,k} + b\,\beta_{l,m,n}) = a\,\Delta^r(\alpha_{i,j,k}) + b\,\Delta^r(\beta_{l,m,n})</math>,
<math>\Delta^s(a\,\alpha_{i,j,k} + b\,\beta_{l,m,n}) = a\,\Delta^s(\alpha_{i,j,k}) + b\,\Delta^s(\beta_{l,m,n})</math>, and
<math>\Delta^t(a\,\alpha_{i,j,k} + b\,\beta_{l,m,n}) = a\,\Delta^t(\alpha_{i,j,k}) + b\,\Delta^t(\beta_{l,m,n})</math>.
Receiver clock errors can be approximately eliminated by differencing the phases measured from satellite 1 with that from satellite 2 at the same epoch. This difference is designated as <math>\Delta^s(\phi_{1,1,1}) = \phi_{1,2,1} - \phi_{1,1,1}</math>.
Double differencing computes the difference of receiver 1's satellite difference from that of receiver 2. This approximately eliminates satellite clock errors. This double difference is <math>\Delta^r(\Delta^s(\phi_{1,1,1})) = (\phi_{2,2,1} - \phi_{2,1,1}) - (\phi_{1,2,1} - \phi_{1,1,1})</math>.
Triple differencing subtracts the receiver difference from time 1 from that of time 2. This eliminates the ambiguity associated with the integral number of wavelengths in carrier phase provided this ambiguity does not change with time. Thus the triple difference result eliminates practically all clock bias errors and the integer ambiguity. Atmospheric delay and satellite ephemeris errors have been significantly reduced. This triple difference is <math>\Delta^t(\Delta^r(\Delta^s(\phi_{1,1,1}))) = \Delta^r(\Delta^s(\phi_{1,1,2})) - \Delta^r(\Delta^s(\phi_{1,1,1}))</math>.
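To make the differencing operators concrete, here is a minimal Python sketch that forms single, double, and triple differences from a dictionary of carrier-phase measurements keyed by (receiver, satellite, epoch); the measurement values in the demo are hypothetical.

<syntaxhighlight lang="python">
def single_diff(phi, rcv, sat_a, sat_b, epoch):
    """Between-satellite difference: removes the receiver clock error."""
    return phi[(rcv, sat_b, epoch)] - phi[(rcv, sat_a, epoch)]

def double_diff(phi, rcv_a, rcv_b, sat_a, sat_b, epoch):
    """Between-receiver difference of single differences:
    also removes the satellite clock error."""
    return (single_diff(phi, rcv_b, sat_a, sat_b, epoch)
            - single_diff(phi, rcv_a, sat_a, sat_b, epoch))

def triple_diff(phi, rcv_a, rcv_b, sat_a, sat_b, epoch_a, epoch_b):
    """Between-epoch difference of double differences:
    also removes the constant integer-cycle ambiguity."""
    return (double_diff(phi, rcv_a, rcv_b, sat_a, sat_b, epoch_b)
            - double_diff(phi, rcv_a, rcv_b, sat_a, sat_b, epoch_a))

if __name__ == "__main__":
    # Hypothetical phase measurements (cycles), keyed by (receiver, satellite, epoch).
    # Terms that depend on only one index (clock-like biases) cancel in the
    # triple difference; only the small mixed term survives here.
    phi = {(r, s, t): 100000.0 + 7919 * r + 104729 * s + 1.25 * t + 0.001 * r * s * t
           for r in (1, 2) for s in (1, 2) for t in (1, 2)}
    print(triple_diff(phi, 1, 2, 1, 2, 1, 2))
</syntaxhighlight>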
Triple difference results can be used to estimate unknown variables. For example, if the position of receiver 1 is known but the position of receiver 2 is unknown, it may be possible to estimate the position of receiver 2 using numerical root finding and least squares. Triple difference results for three independent time pairs may be sufficient to solve for receiver 2's three position components. This may require a numerical procedure, which in turn needs an approximation of receiver 2's position as an initial value; this can be provided from the navigation message and the intersection of sphere surfaces. Such a reasonable estimate can be key to successful multidimensional root finding. Processing additional time pairs can improve accuracy by overdetermining the solution. Least squares can estimate an overdetermined system, determining the position of receiver 2 that best fits the observed triple difference results under the criterion of minimizing the sum of the squared residuals.
Regulatory spectrum issues concerning GPS receivers
In the United States, GPS receivers are regulated under the Federal Communications Commission's (FCC) Part 15 rules. As indicated in the manuals of GPS-enabled devices sold in the United States, as a Part 15 device, it "must accept any interference received, including interference that may cause undesired operation." With respect to GPS devices in particular, the FCC states that GPS receiver manufacturers, "must use receivers that reasonably discriminate against reception of signals outside their allocated spectrum." For the last 30 years, GPS receivers have operated next to the Mobile Satellite Service band, and have discriminated against reception of mobile satellite services, such as Inmarsat, without any issue.
The spectrum allocated for GPS L1 use by the FCC is 1559 to 1610 MHz, while the spectrum allocated for satellite-to-ground use owned by LightSquared is the Mobile Satellite Service band. Since 1996, the FCC has authorized licensed use of the spectrum neighboring the GPS band of 1525 to 1559 MHz to the Virginia company LightSquared. On March 1, 2001, the FCC received an application from LightSquared's predecessor, Motient Services, to use their allocated frequencies for an integrated satellite-terrestrial service. In 2002, the U.S. GPS Industry Council came to an out-of-band-emissions (OOBE) agreement with LightSquared to prevent transmissions from LightSquared's ground-based stations from emitting transmissions into the neighboring GPS band of 1559 to 1610 MHz. In 2004, the FCC adopted the OOBE agreement in its authorization for LightSquared to deploy a ground-based network ancillary to their satellite system – known as the Ancillary Tower Components (ATCs) – "We will authorize MSS ATC subject to conditions that ensure that the added terrestrial component remains ancillary to the principal MSS offering. We do not intend, nor will we permit, the terrestrial component to become a stand-alone service." This authorization was reviewed and approved by the U.S. Interdepartment Radio Advisory Committee, which includes the U.S. Department of Agriculture, U.S. Space Force, U.S. Army, U.S. Coast Guard, Federal Aviation Administration, National Aeronautics and Space Administration (NASA), U.S. Department of the Interior, and U.S. Department of Transportation.
In January 2011, the FCC conditionally authorized LightSquared's wholesale customers—such as Best Buy, Sharp, and C Spire—to only purchase an integrated satellite-ground-based service from LightSquared and re-sell that integrated service on devices that are equipped to only use the ground-based signal using LightSquared's allocated frequencies of 1525 to 1559 MHz. In December 2010, GPS receiver manufacturers expressed concerns to the FCC that LightSquared's signal would interfere with GPS receiver devices although the FCC's policy considerations leading up to the January 2011 order did not pertain to any proposed changes to the maximum number of ground-based LightSquared stations or the maximum power at which these stations could operate. The January 2011 order makes final authorization contingent upon studies of GPS interference issues carried out by a LightSquared led working group along with GPS industry and Federal agency participation. On February 14, 2012, the FCC initiated proceedings to vacate LightSquared's Conditional Waiver Order based on the NTIA's conclusion that there was currently no practical way to mitigate potential GPS interference.
GPS receiver manufacturers design GPS receivers to use spectrum beyond the GPS-allocated band. In some cases, GPS receivers are designed to use up to 400 MHz of spectrum in either direction of the L1 frequency of 1575.42 MHz, because mobile satellite services in those regions are broadcasting from space to ground, and at power levels commensurate with mobile satellite services. As regulated under the FCC's Part 15 rules, GPS receivers are not warranted protection from signals outside GPS-allocated spectrum. This is why GPS operates next to the Mobile Satellite Service band, and also why the Mobile Satellite Service band operates next to GPS. The symbiotic relationship of spectrum allocation ensures that users of both bands are able to operate cooperatively and freely.
The FCC adopted rules in February 2003 that allowed Mobile Satellite Service (MSS) licensees such as LightSquared to construct a small number of ancillary ground-based towers in their licensed spectrum to "promote more efficient use of terrestrial wireless spectrum." In those 2003 rules, the FCC stated "As a preliminary matter, terrestrial [Commercial Mobile Radio Service (“CMRS”)] and MSS ATC are expected to have different prices, coverage, product acceptance and distribution; therefore, the two services appear, at best, to be imperfect substitutes for one another that would be operating in predominantly different market segments... MSS ATC is unlikely to compete directly with terrestrial CMRS for the same customer base...". In 2004, the FCC clarified that the ground-based towers would be ancillary, noting that "We will authorize MSS ATC subject to conditions that ensure that the added terrestrial component remains ancillary to the principal MSS offering. We do not intend, nor will we permit, the terrestrial component to become a stand-alone service." In July 2010, the FCC stated that it expected LightSquared to use its authority to offer an integrated satellite-terrestrial service to "provide mobile broadband services similar to those provided by terrestrial mobile providers and enhance competition in the mobile broadband sector." GPS receiver manufacturers have argued that LightSquared's licensed spectrum of 1525 to 1559 MHz was never envisioned as being used for high-speed wireless broadband based on the 2003 and 2004 FCC ATC rulings making clear that the Ancillary Tower Component (ATC) would be, in fact, ancillary to the primary satellite component. To build public support of efforts to continue the 2004 FCC authorization of LightSquared's ancillary terrestrial component vs. a simple ground-based LTE service in the Mobile Satellite Service band, GPS receiver manufacturer Trimble Navigation Ltd. formed the "Coalition To Save Our GPS."
The FCC and LightSquared have each made public commitments to solve the GPS interference issue before the network is allowed to operate. According to Chris Dancy of the Aircraft Owners and Pilots Association, airline pilots with the type of systems that would be affected "may go off course and not even realize it." The problems could also affect the Federal Aviation Administration upgrade to the air traffic control system, United States Defense Department guidance, and local emergency services including 911.
On February 14, 2012, the FCC moved to bar LightSquared's planned national broadband network after being informed by the National Telecommunications and Information Administration (NTIA), the federal agency that coordinates spectrum uses for the military and other federal government entities, that "there is no practical way to mitigate potential interference at this time". LightSquared is challenging the FCC's action.
Other systems
Other notable satellite navigation systems in use or various states of development include:
BeiDou – a global system deployed and operated by the People's Republic of China, which initiated global services in 2019.
Galileo – a global system being developed by the European Union and other partner countries, which began operation in 2016, and is expected to be fully deployed by 2020.
GLONASS – Russia's global navigation system. Fully operational worldwide.
NavIC – a regional navigation system developed by the Indian Space Research Organisation.
QZSS – a regional navigation system receivable in the Asia-Oceania regions, with a focus on Japan.
See also
List of GPS satellites
GPS satellite blocks
GPS signals
GPS navigation software
GPS/INS
GPS spoofing
Indoor positioning system
Local Area Augmentation System
Local positioning system
Military invention
Mobile phone tracking
Navigation paradox
Notice Advisory to Navstar Users
S-GPS
Notes
References
Further reading
Global Positioning System Open Courseware from MIT, 2012
External links
FAA GPS FAQ
GPS.gov – General public education website created by the U.S. Government
20th-century inventions
Equipment of the United States Space Force
Military equipment introduced in the 1970s |
12808 | https://en.wikipedia.org/wiki/GSM | GSM | The Global System for Mobile Communications (GSM) is a standard developed by the European Telecommunications Standards Institute (ETSI) to describe the protocols for second-generation (2G) digital cellular networks used by mobile devices such as mobile phones and tablets. It was first deployed in Finland in December 1991. By the mid-2010s, it became a global standard for mobile communications achieving over 90% market share, and operating in over 193 countries and territories.
2G networks developed as a replacement for first generation (1G) analog cellular networks. The GSM standard originally described a digital, circuit-switched network optimized for full duplex voice telephony. This expanded over time to include data communications, first by circuit-switched transport, then by packet data transport via General Packet Radio Service (GPRS), and Enhanced Data Rates for GSM Evolution (EDGE).
Subsequently, the 3GPP developed third-generation (3G) UMTS standards, followed by the fourth-generation (4G) LTE Advanced and the fifth-generation 5G standards, which do not form part of the ETSI GSM standard.
"GSM" is a trade mark owned by the GSM Association. It may also refer to the (initially) most common voice codec used, Full Rate.
As a result of the network's widespread use across Europe, the acronym "GSM" was briefly used as a generic term for mobile phones in France, the Netherlands and Belgium, and many people in Belgium still use it today. Many carriers (such as Verizon) planned to shut down their GSM and CDMA networks in 2022.
History
Initial development for GSM by Europeans
In 1983, work began to develop a European standard for digital cellular voice telecommunications when the European Conference of Postal and Telecommunications Administrations (CEPT) set up the Groupe Spécial Mobile (GSM) committee and later provided a permanent technical-support group based in Paris. Five years later, in 1987, 15 representatives from 13 European countries signed a memorandum of understanding in Copenhagen to develop and deploy a common cellular telephone system across Europe, and EU rules were passed to make GSM a mandatory standard. The decision to develop a continental standard eventually resulted in a unified, open, standard-based network which was larger than that in the United States.
In February 1987 Europe produced the first agreed GSM Technical Specification. Ministers from the four big EU countries cemented their political support for GSM with the Bonn Declaration on Global Information Networks in May and the GSM MoU was tabled for signature in September. The MoU drew in mobile operators from across Europe to pledge to invest in new GSM networks to an ambitious common date.
In this short 38-week period the whole of Europe (countries and industries) had been brought behind GSM in a rare unity and speed guided by four public officials: Armin Silberhorn (Germany), Stephen Temple (UK), Philippe Dupuis (France), and Renzo Failli (Italy). In 1989 the Groupe Spécial Mobile committee was transferred from CEPT to the European Telecommunications Standards Institute (ETSI).
The IEEE/RSE awarded to Thomas Haug and Philippe Dupuis the 2018 James Clerk Maxwell medal for their "leadership in the development of the first international mobile communications standard with subsequent evolution into worldwide smartphone data communication". The GSM (2G) has evolved into 3G, 4G and 5G.
First networks
In parallel France and Germany signed a joint development agreement in 1984 and were joined by Italy and the UK in 1986. In 1986, the European Commission proposed reserving the 900 MHz spectrum band for GSM. The former Finnish prime minister Harri Holkeri made the world's first GSM call on 1 July 1991, calling Kaarina Suonio (deputy mayor of the city of Tampere) using a network built by Nokia and Siemens and operated by Radiolinja. The following year saw the sending of the first short messaging service (SMS or "text message") message, and Vodafone UK and Telecom Finland signed the first international roaming agreement.
Enhancements
Work began in 1991 to expand the GSM standard to the 1800 MHz frequency band, and the first 1800 MHz network, known as DCS 1800, became operational in the UK by 1993. Also that year, Telecom Australia became the first network operator to deploy a GSM network outside Europe, and the first practical hand-held GSM mobile phone became available.
In 1995 fax, data and SMS messaging services were launched commercially, the first 1900 MHz GSM network became operational in the United States and GSM subscribers worldwide exceeded 10 million. In the same year, the GSM Association formed. Pre-paid GSM SIM cards were launched in 1996 and worldwide GSM subscribers passed 100 million in 1998.
In 2000 the first commercial GPRS services were launched and the first GPRS-compatible handsets became available for sale. In 2001, the first UMTS (W-CDMA) network was launched, a 3G technology that is not part of GSM. Worldwide GSM subscribers exceeded 500 million. In 2002, the first Multimedia Messaging Service (MMS) was introduced and the first GSM network in the 800 MHz frequency band became operational. EDGE services first became operational in a network in 2003, and the number of worldwide GSM subscribers exceeded 1 billion in 2004.
By 2005 GSM networks accounted for more than 75% of the worldwide cellular network market, serving 1.5 billion subscribers. In 2005, the first HSDPA-capable network also became operational. The first HSUPA network launched in 2007. (High-Speed Packet Access (HSPA) and its uplink and downlink versions are 3G technologies, not part of GSM.) Worldwide GSM subscribers exceeded three billion in 2008.
Adoption
The GSM Association estimated in 2011 that technologies defined in the GSM standard served 80% of the mobile market, encompassing more than 5 billion people across more than 212 countries and territories, making GSM the most ubiquitous of the many standards for cellular networks.
GSM is a second-generation (2G) standard employing time-division multiple-access (TDMA) spectrum-sharing, issued by the European Telecommunications Standards Institute (ETSI). The GSM standard does not include the 3G Universal Mobile Telecommunications System (UMTS), code-division multiple access (CDMA) technology, nor the 4G LTE orthogonal frequency-division multiple access (OFDMA) technology standards issued by the 3GPP.
GSM, for the first time, set a common standard for Europe for wireless networks. It was also adopted by many countries outside Europe. This allowed subscribers to use other GSM networks that have roaming agreements with each other. The common standard reduced research and development costs, since hardware and software could be sold with only minor adaptations for the local market.
Discontinuation
Telstra in Australia shut down its 2G GSM network on 1 December 2016, the first mobile network operator to decommission a GSM network. The second mobile provider to shut down its GSM network (on 1 January 2017) was AT&T Mobility from the United States.
Optus in Australia completed the shut down of its 2G GSM network on 1 August 2017, part of the Optus GSM network covering Western Australia and the Northern Territory had earlier in the year been shut down in April 2017.
Singapore shut down 2G services entirely in April 2017.
Technical details
Network structure
The network is structured into several discrete sections:
Base station subsystem – the base stations and their controllers
Network and Switching Subsystem – the part of the network most similar to a fixed network, sometimes just called the "core network"
GPRS Core Network – the optional part which allows packet-based Internet connections
Operations support system (OSS) – network maintenance
Base-station subsystem
GSM utilizes a cellular network, meaning that cell phones connect to it by searching for cells in the immediate vicinity. There are five different cell sizes in a GSM network:
macro
micro
pico
femto, and
umbrella cells
The coverage area of each cell varies according to the implementation environment. Macro cells can be regarded as cells where the base-station antenna is installed on a mast or a building above average rooftop level. Micro cells are cells whose antenna height is under average rooftop level; they are typically deployed in urban areas. Picocells are small cells whose coverage diameter is a few dozen meters; they are mainly used indoors. Femtocells are cells designed for use in residential or small-business environments and connect to a telecommunications service provider's network via a broadband-internet connection. Umbrella cells are used to cover shadowed regions of smaller cells and to fill in gaps in coverage between those cells.
Cell horizontal radius varies – depending on antenna height, antenna gain, and propagation conditions – from a couple of hundred meters to several tens of kilometers. The longest distance the GSM specification supports in practical use is . There are also several implementations of the concept of an extended cell, where the cell radius could be double or even more, depending on the antenna system, the type of terrain, and the timing advance.
GSM supports indoor coverage – achievable by using an indoor picocell base station, or an indoor repeater with distributed indoor antennas fed through power splitters – to deliver the radio signals from an antenna outdoors to the separate indoor distributed antenna system. Picocells are typically deployed when significant call capacity is needed indoors, as in shopping centers or airports. However, this is not a prerequisite, since indoor coverage is also provided by in-building penetration of radio signals from any nearby cell.
GSM carrier frequencies
GSM networks operate in a number of different carrier frequency ranges (separated into GSM frequency ranges for 2G and UMTS frequency bands for 3G), with most 2G GSM networks operating in the 900 MHz or 1800 MHz bands. Where these bands were already allocated, the 850 MHz and 1900 MHz bands were used instead (for example in Canada and the United States). In rare cases the 400 and 450 MHz frequency bands are assigned in some countries because they were previously used for first-generation systems.
For comparison, most 3G networks in Europe operate in the 2100 MHz frequency band. For more information on worldwide GSM frequency usage, see GSM frequency bands.
Regardless of the frequency selected by an operator, it is divided into timeslots for individual phones. This allows eight full-rate or sixteen half-rate speech channels per radio frequency. These eight radio timeslots (or burst periods) are grouped into a TDMA frame. Half-rate channels use alternate frames in the same timeslot. The channel data rate for all is and the frame duration is
The transmission power in the handset is limited to a maximum of 2 watts in and in .
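As a rough sketch of the frame arithmetic behind the timeslot scheme just described, the snippet below assumes the commonly cited GSM air-interface constants (a gross rate of 270.833 kbit/s per carrier and 156.25-bit bursts in 8 timeslots per TDMA frame); these figures are stated here as assumptions rather than taken from the text above.

<syntaxhighlight lang="python">
GROSS_BIT_RATE = 270_833.0   # bits per second on one GSM carrier (assumed)
BITS_PER_BURST = 156.25      # bits per timeslot burst, including guard time (assumed)
SLOTS_PER_FRAME = 8

burst_duration = BITS_PER_BURST / GROSS_BIT_RATE       # ~0.577 ms
frame_duration = SLOTS_PER_FRAME * burst_duration      # ~4.615 ms
gross_per_slot = GROSS_BIT_RATE / SLOTS_PER_FRAME      # ~33.9 kbit/s

print(f"burst duration ~ {burst_duration * 1e3:.3f} ms")
print(f"frame duration ~ {frame_duration * 1e3:.3f} ms")
print(f"gross rate per full-rate timeslot ~ {gross_per_slot / 1e3:.1f} kbit/s "
      f"(of which roughly 13 kbit/s carries full-rate speech after channel coding)")
</syntaxhighlight>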
Voice codecs
GSM has used a variety of voice codecs to squeeze 3.1 kHz audio into between 7 and 13 kbit/s. Originally, two codecs, named after the types of data channel they were allocated, were used, called Half Rate (6.5 kbit/s) and Full Rate (13 kbit/s). These used a system based on linear predictive coding (LPC). In addition to being efficient with bitrates, these codecs also made it easier to identify more important parts of the audio, allowing the air interface layer to prioritize and better protect these parts of the signal. GSM was further enhanced in 1997
with the enhanced full rate (EFR) codec, a 12.2 kbit/s codec that uses a full-rate channel. Finally, with the development of UMTS, EFR was refactored into a variable-rate codec called AMR-Narrowband, which is high quality and robust against interference when used on full-rate channels, or less robust but still relatively high quality when used in good radio conditions on half-rate channels.
Subscriber Identity Module (SIM)
One of the key features of GSM is the Subscriber Identity Module, commonly known as a SIM card. The SIM is a detachable smart card containing the user's subscription information and phone book. This allows the user to retain their information after switching handsets. Alternatively, the user can change operators while retaining the handset simply by changing the SIM.
Phone locking
Sometimes mobile network operators restrict handsets that they sell for exclusive use in their own network. This is called SIM locking and is implemented by a software feature of the phone. A subscriber may usually contact the provider to remove the lock for a fee, utilize private services to remove the lock, or use software and websites to unlock the handset themselves. It is possible to hack past a phone locked by a network operator.
In some countries and regions (e.g., Bangladesh, Belgium, Brazil, Canada, Chile, Germany, Hong Kong, India, Iran, Lebanon, Malaysia, Nepal, Norway, Pakistan, Poland, Singapore, South Africa, Sri Lanka, Thailand) all phones are sold unlocked due to the abundance of dual SIM handsets and operators.
GSM security
GSM was intended to be a secure wireless system. It provides user authentication using a pre-shared key and challenge–response, and over-the-air encryption. However, GSM is vulnerable to different types of attack, each of them aimed at a different part of the network.
The development of UMTS introduced an optional Universal Subscriber Identity Module (USIM), that uses a longer authentication key to give greater security, as well as mutually authenticating the network and the user, whereas GSM only authenticates the user to the network (and not vice versa). The security model therefore offers confidentiality and authentication, but limited authorization capabilities, and no non-repudiation.
GSM uses several cryptographic algorithms for security. The A5/1, A5/2, and A5/3 stream ciphers are used for ensuring over-the-air voice privacy. A5/1 was developed first and is a stronger algorithm used within Europe and the United States; A5/2 is weaker and used in other countries. Serious weaknesses have been found in both algorithms: it is possible to break A5/2 in real-time with a ciphertext-only attack, and in January 2007, The Hacker's Choice started the A5/1 cracking project with plans to use FPGAs that allow A5/1 to be broken with a rainbow table attack. The system supports multiple algorithms so operators may replace that cipher with a stronger one.
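For illustration of the LFSR-based design of the A5 family, here is a heavily simplified Python sketch of an A5/1-style keystream generator using the register lengths, feedback taps and majority-clocking rule commonly cited in the open literature (19-, 22- and 23-bit registers with clocking bits 8, 10 and 10). The key and frame-number loading phase is omitted and a toy initial state is used, so this is a teaching sketch rather than a drop-in implementation of the deployed cipher.

<syntaxhighlight lang="python">
# Commonly cited A5/1 parameters: (register length, feedback tap positions,
# clocking bit), all bit positions 0-indexed from the least significant end.
REGS = [
    (19, (13, 16, 17, 18), 8),
    (22, (20, 21), 10),
    (23, (7, 20, 21, 22), 10),
]

def majority(a, b, c):
    return (a & b) | (a & c) | (b & c)

def keystream(state, nbits):
    """Generate nbits of A5/1-style keystream from three register values (sketch).

    state -- list of three integers, one per LFSR, assumed to be already loaded
             with key and frame number (that loading procedure is omitted here).
    """
    out = []
    for _ in range(nbits):
        clk = [(state[i] >> REGS[i][2]) & 1 for i in range(3)]
        m = majority(*clk)
        for i, (length, taps, _clock_bit) in enumerate(REGS):
            if clk[i] == m:  # only registers agreeing with the majority are clocked
                fb = 0
                for t in taps:
                    fb ^= (state[i] >> t) & 1
                state[i] = ((state[i] << 1) | fb) & ((1 << length) - 1)
        # output bit: XOR of the most significant bit of each register
        out.append(((state[0] >> 18) ^ (state[1] >> 21) ^ (state[2] >> 22)) & 1)
    return out

if __name__ == "__main__":
    toy_state = [0x5A5A5, 0x2AAAAA, 0x155555]   # arbitrary toy register contents
    print("".join(map(str, keystream(toy_state, 32))))
</syntaxhighlight>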
Since 2000, different efforts have been made in order to crack the A5 encryption algorithms. Both A5/1 and A5/2 algorithms have been broken, and their cryptanalysis has been revealed in the literature. As an example, Karsten Nohl developed a number of rainbow tables (static values which reduce the time needed to carry out an attack) and found new sources for known plaintext attacks. He said that it is possible to build "a full GSM interceptor...from open-source components" but that they had not done so because of legal concerns. Nohl claimed that he was able to intercept voice and text conversations by impersonating another user to listen to voicemail, make calls, or send text messages using a seven-year-old Motorola cellphone and decryption software available for free online.
GSM uses General Packet Radio Service (GPRS) for data transmissions like browsing the web. The most commonly deployed GPRS ciphers were publicly broken in 2011.
The researchers revealed flaws in the commonly used GEA/1 and GEA/2 (standing for GPRS Encryption Algorithms 1 and 2) ciphers and published the open-source "gprsdecode" software for sniffing GPRS networks. They also noted that some carriers do not encrypt the data (i.e., using GEA/0) in order to detect the use of traffic or protocols they do not like (e.g., Skype), leaving customers unprotected. GEA/3 seems to remain relatively hard to break and is said to be in use on some more modern networks. If used with USIM to prevent connections to fake base stations and downgrade attacks, users will be protected in the medium term, though migration to 128-bit GEA/4 is still recommended.
The first public cryptanalysis of GEA/1 and GEA/2 (also written GEA-1 and GEA-2) was done in 2021. It concluded that, although GEA-1 uses a 64-bit key, the algorithm actually provides only 40 bits of security, due to a relationship between two parts of the algorithm. The researchers found that this relationship was very unlikely to have arisen by chance, suggesting it was introduced intentionally. This may have been done in order to satisfy European controls on export of cryptographic programs.
Standards information
The GSM systems and services are described in a set of standards governed by ETSI, where a full list is maintained.
GSM open-source software
Several open-source software projects exist that provide certain GSM features:
gsmd daemon by Openmoko
OpenBTS develops a Base transceiver station
The GSM Software Project aims to build a GSM analyzer for less than $1,000
OsmocomBB developers intend to replace the proprietary baseband GSM stack with a free software implementation
YateBTS develops a Base transceiver station
Issues with patents and open source
Patents remain a problem for any open-source GSM implementation, because it is not possible for GNU or any other free software distributor to guarantee immunity from all lawsuits by the patent holders against the users. Furthermore, new features are continually being added to the standard, and these remain under patent protection for a number of years.
The original GSM implementations from 1991 may now be entirely free of patent encumbrances, however patent freedom is not certain due to the United States' "first to invent" system that was in place until 2012. The "first to invent" system, coupled with "patent term adjustment" can extend the life of a U.S. patent far beyond 20 years from its priority date. It is unclear at this time whether OpenBTS will be able to implement features of that initial specification without limit. As patents subsequently expire, however, those features can be added into the open-source version. , there have been no lawsuits against users of OpenBTS over GSM use.
See also
Cellular network
Enhanced Data Rates for GSM Evolution (EDGE)
Enhanced Network Selection (ENS)
GSM forwarding standard features codes – list of call forward codes working with all operators and phones
GSM frequency bands
GSM modem
GSM services
Cell Broadcast
GSM localization
Multimedia Messaging Service (MMS)
NITZ Network Identity and Time Zone
Wireless Application Protocol (WAP)
GSM-R (GSM-Railway)
GSM USSD codes – Unstructured Supplementary Service Data: list of all standard GSM codes for network and SIM related functions
Handoff
High-Speed Downlink Packet Access (HSDPA)
International Mobile Equipment Identity (IMEI)
International Mobile Subscriber Identity (IMSI)
Long Term Evolution (LTE)
MSISDN Mobile Subscriber ISDN Number
Nordic Mobile Telephone (NMT)
ORFS
Personal communications network (PCN)
RTP audio video profile
Simulation of GSM networks
Standards
Comparison of mobile phone standards
GEO-Mobile Radio Interface
GSM 02.07 – Cellphone features
GSM 03.48 – Security mechanisms for the SIM application toolkit
Intelligent Network
Parlay X
RRLP – Radio Resource Location Protocol
Um interface
Visitors Location Register (VLR)
References
Further reading
External links
GSM Association—Official industry trade group representing GSM network operators worldwide
3GPP—3G GSM standards development group
LTE-3GPP.info: online GSM messages decoder fully supporting all 3GPP releases from early GSM to latest 5G
Telecommunications-related introductions in 1991
GSM standard |
12884 | https://en.wikipedia.org/wiki/GCHQ | GCHQ | Government Communications Headquarters, commonly known as GCHQ, is an intelligence and security organisation responsible for providing signals intelligence (SIGINT) and information assurance (IA) to the government and armed forces of the United Kingdom. Based at "The Doughnut" in the suburbs of Cheltenham, GCHQ is the responsibility of the country's Secretary of State for Foreign and Commonwealth Affairs (Foreign Secretary), but it is not a part of the Foreign Office and its Director ranks as a Permanent Secretary.
GCHQ was originally established after the First World War as the Government Code and Cypher School (GC&CS) and was known under that name until 1946. During the Second World War it was located at Bletchley Park, where it was responsible for breaking the German Enigma codes. There are two main components of the GCHQ, the Composite Signals Organisation (CSO), which is responsible for gathering information, and the National Cyber Security Centre (NCSC), which is responsible for securing the UK's own communications. The Joint Technical Language Service (JTLS) is a small department and cross-government resource responsible for mainly technical language support and translation and interpreting services across government departments. It is co-located with GCHQ for administrative purposes.
In 2013, GCHQ received considerable media attention when the former National Security Agency contractor Edward Snowden revealed that the agency was in the process of collecting all online and telephone data in the UK via the Tempora programme. Snowden's revelations began a spate of ongoing disclosures of global surveillance. The Guardian newspaper was then forced to destroy all files Snowden had given them because of the threats of a lawsuit under the Official Secrets Act.
Structure
GCHQ is led by the Director of GCHQ, Jeremy Fleming, and a Corporate Board, made up of executive and non-executive directors. Reporting to the Corporate Board are:
Sigint missions: comprising maths and cryptanalysis, IT and computer systems, linguistics and translation, and the intelligence analysis unit
Enterprise: comprising applied research and emerging technologies, corporate knowledge and information systems, commercial supplier relationships, and biometrics
Corporate management: enterprise resource planning, human resources, internal audit, and architecture
National Cyber Security Centre (NCSC).
History
Government Code and Cypher School (GC&CS)
During the First World War, the British Army and Royal Navy had separate signals intelligence agencies, MI1b and NID25 (initially known as Room 40) respectively. In 1919, the Cabinet's Secret Service Committee, chaired by Lord Curzon, recommended that a peacetime codebreaking agency should be created, a task which was given to the Director of Naval Intelligence, Hugh Sinclair. Sinclair merged staff from NID25 and MI1b into the new organisation, which initially consisted of around 25–30 officers and a similar number of clerical staff. It was titled the "Government Code and Cypher School" (GC&CS), a cover-name which was chosen by Victor Forbes of the Foreign Office. Alastair Denniston, who had been a member of NID25, was appointed as its operational head. It was initially under the control of the Admiralty and located in Watergate House, Adelphi, London. Its public function was "to advise as to the security of codes and cyphers used by all Government departments and to assist in their provision", but also had a secret directive to "study the methods of cypher communications used by foreign powers". GC&CS officially formed on 1 November 1919, and produced its first decrypt prior to that date, on 19 October.
Before the Second World War, GC&CS was a relatively small department. By 1922, the main focus of GC&CS was on diplomatic traffic, with "no service traffic ever worth circulating" and so, at the initiative of Lord Curzon, it was transferred from the Admiralty to the Foreign Office. GC&CS came under the supervision of Hugh Sinclair, who by 1923 was both the Chief of SIS and Director of GC&CS. In 1925, both organisations were co-located on different floors of Broadway Buildings, opposite St. James's Park. Messages decrypted by GC&CS were distributed in blue-jacketed files that became known as "BJs". In the 1920s, GC&CS was successfully reading Soviet Union diplomatic cyphers. However, in May 1927, during a row over clandestine Soviet support for the General Strike and the distribution of subversive propaganda, Prime Minister Stanley Baldwin made details from the decrypts public.
During the Second World War, GC&CS was based largely at Bletchley Park, in present-day Milton Keynes, working on understanding the German Enigma machine and Lorenz ciphers. In 1940, GC&CS was working on the diplomatic codes and ciphers of 26 countries, tackling over 150 diplomatic cryptosystems. Senior staff included Alastair Denniston, Oliver Strachey, Dilly Knox, John Tiltman, Edward Travis, Ernst Fetterlein, Josh Cooper, Donald Michie, Alan Turing, Gordon Welchman, Joan Clarke, Max Newman, William Tutte, I. J. (Jack) Good, Peter Calvocoressi and Hugh Foss.
An outstation in the Far East, the Far East Combined Bureau was set up in Hong Kong in 1935 and moved to Singapore in 1939. Subsequently, with the Japanese advance down the Malay Peninsula, the Army and RAF codebreakers went to the Wireless Experimental Centre in Delhi, India. The Navy codebreakers in FECB went to Colombo, Ceylon, then to Kilindini, near Mombasa, Kenya.
Post Second World War
GC&CS was renamed the Government Communications Headquarters (GCHQ) in June 1946.
The organisation was at first based in Eastcote in northwest London, then in 1951 moved to the outskirts of Cheltenham, setting up two sites at Oakley and Benhall. One of the major reasons for selecting Cheltenham was that the town had been the location of the headquarters of the United States Army Services of Supply for the European Theater during the War, which built up a telecommunications infrastructure in the region to carry out its logistics tasks.
Following the Second World War, US and British intelligence have shared information as part of the UKUSA Agreement. The principal aspect of this is that GCHQ and its US equivalent, the National Security Agency (NSA), share technologies, infrastructure and information.
GCHQ ran many signals intelligence (SIGINT) monitoring stations abroad. During the early Cold War, the remnants of the British Empire provided a global network of ground stations which were a major contribution to the UKUSA Agreement; the US regarded RAF Little Sai Wan in Hong Kong as the most valuable of these. The monitoring stations were largely run by inexpensive National Service recruits, but when this ended in the early 1960s, the increased cost of civilian employees caused budgetary problems. In 1965 a Foreign Office review found that 11,500 staff were involved in SIGINT collection (8,000 GCHQ staff and 3,500 military personnel), exceeding the size of the Diplomatic Service. Reaction to the Suez War led to the eviction of GCHQ from several of its best foreign SIGINT collection sites, including the new Perkar, Ceylon site and RAF Habbaniya, Iraq. The staff largely moved to tented encampments on military bases in Cyprus, which later became the Sovereign Base Area.
Duncan Campbell and Mark Hosenball revealed the existence of GCHQ in 1976 in an article for Time Out; as a result, Hosenball was deported from the UK. GCHQ had a very low profile in the media until 1983 when the trial of Geoffrey Prime, a KGB mole within it, created considerable media interest.
Trade union disputes
In 1984, GCHQ was the centre of a political row when, in the wake of strikes which affected Sigint collection, the Conservative government of Margaret Thatcher prohibited its employees from belonging to a trade union. Following the breakdown of talks and the failure to negotiate a no-strike agreement, it was believed that membership of a union would be in conflict with national security. A number of mass national one-day strikes were held in protest at this decision, seen by some as the first step towards wider bans on trade unions. Appeals to British courts and the European Commission of Human Rights were unsuccessful. The government offered a sum of money to each employee who agreed to give up their union membership. An appeal to the International Labour Organization (ILO) resulted in a decision that the government's actions were in violation of the Freedom of Association and Protection of the Right to Organise Convention.
A no-strike agreement was eventually negotiated and the ban lifted by the incoming Labour government in 1997, with the Government Communications Group of the Public and Commercial Services Union (PCS) being formed to represent interested employees at all grades. In 2000, a group of 14 former GCHQ employees, who had been dismissed after refusing to give up their union membership, were offered re-employment, which three of them accepted.
Post Cold War
1990s: Post-Cold War restructuring
The Intelligence Services Act 1994 formalised the activities of the intelligence agencies for the first time, defining their purpose, and the British Parliament's Intelligence and Security Committee was given a remit to examine the expenditure, administration and policy of the three intelligence agencies. The objectives of GCHQ were defined as working "in the interests of national security, with particular reference to the defence and foreign policies of Her Majesty's government; in the interests of the economic wellbeing of the United Kingdom; and in support of the prevention and the detection of serious crime". During the introduction of the Intelligence Services Act in late 1993, the former Prime Minister Jim Callaghan had described GCHQ as a "full-blown bureaucracy", adding that future bodies created to provide oversight of the intelligence agencies should "investigate whether all the functions that GCHQ carries out today are still necessary."
In late 1993 the civil servant Michael Quinlan advised a deep review of the work of GCHQ following the conclusion of his "Review of Intelligence Requirements and Resources", which had imposed a 3% cut on the agency. The Chief Secretary to the Treasury, Jonathan Aitken, subsequently held face-to-face discussions with the intelligence agency directors to assess further savings in the wake of Quinlan's review. Aldrich (2010) suggests that Sir John Adye, the then Director of GCHQ, performed badly in meetings with Aitken, leading Aitken to conclude that GCHQ was "suffering from out-of-date methods of management and out-of-date methods for assessing priorities". GCHQ's budget was £850 million in 1993, compared to £125 million for the Security Service and SIS (MI5 and MI6). In December 1994 the businessman Roger Hurn was commissioned to begin a review of GCHQ, which was concluded in March 1995. Hurn's report recommended a cut of £100 million in GCHQ's budget; such a large reduction had not been suffered by any British intelligence agency since the end of World War II. The J Division of GCHQ, which had collected SIGINT on Russia, disappeared as a result of the cuts. The cuts had been mostly reversed by 2000 in the wake of threats from violent non-state actors, and risks from increased terrorism, organised crime and illegal access to nuclear, chemical and biological weapons.
David Omand became the Director of GCHQ in 1996, and greatly restructured the agency in the face of new and changing targets and rapid technological change. Omand introduced the concept of "Sinews" (or "SIGINT New Systems") which allowed more flexible working methods, avoiding overlaps in work by creating fourteen domains, each with a well-defined working scope. The tenure of Omand also saw the construction of a modern new headquarters, intended to consolidate the two old sites at Oakley and Benhall into a single, more open-plan work environment. Located on a 176-acre site in Benhall, it would be the largest building constructed for secret intelligence operations outside the United States.
Operations at GCHQ's Chung Hom Kok listening station in Hong Kong ended in 1994. GCHQ's Hong Kong operations were extremely important to its relationship with the NSA, which contributed investment and equipment to the station. In anticipation of the transfer of Hong Kong to the Chinese government in 1997, the Hong Kong station's operations were moved to the Australian Defence Satellite Communications Station in Geraldton in Western Australia.
Operations that used GCHQ's intelligence-gathering capabilities in the 1990s included the monitoring of communications of Iraqi soldiers in the Gulf War, of dissident republican terrorists and the Real IRA, of the various factions involved in the Yugoslav Wars, and of the criminal Kenneth Noye. In the mid 1990s GCHQ began to assist in the investigation of cybercrime.
2000s: Coping with the Internet
At the end of 2003, GCHQ moved into its new building. Built on a circular plan around a large central courtyard, it quickly became known as the Doughnut. At the time, it was one of the largest public-sector building projects in Europe, with an estimated cost of £337 million. The new building, which was designed by Gensler and constructed by Carillion, became the base for all of GCHQ's Cheltenham operations.
The public spotlight fell on GCHQ in late 2003 and early 2004 following the sacking of Katharine Gun after she leaked to The Observer a confidential email from agents at the United States' National Security Agency addressed to GCHQ agents about the wiretapping of UN delegates in the run-up to the 2003 Iraq war.
GCHQ gains its intelligence by monitoring a wide variety of communications and other electronic signals. For this, a number of stations have been established in the UK and overseas. The listening stations are at Cheltenham itself, Bude, Scarborough, Ascension Island, and with the United States at Menwith Hill. Ayios Nikolaos Station in Cyprus is run by the British Army for GCHQ.
In March 2010, GCHQ was criticised by the Intelligence and Security Committee for problems with its IT security practices and failing to meet its targets for work targeted against cyber attacks.
As revealed by Edward Snowden in The Guardian, GCHQ spied on foreign politicians visiting the 2009 G-20 London Summit by eavesdropping on their phone calls and emails and by monitoring their computers; in some cases the surveillance continued after the summit via keyloggers that had been installed during it.
According to Edward Snowden, at that time GCHQ had two principal umbrella programs for collecting communications:
"Mastering the Internet" (MTI) for Internet traffic, which is extracted from fibre-optic cables and can be searched by using the Tempora computer system.
"Global Telecoms Exploitation" (GTE) for telephone traffic.
GCHQ has also had access to the US internet monitoring programme PRISM from at least as far back as June 2010. PRISM is said to give the National Security Agency and FBI easy access to the systems of nine of the world's top internet companies, including Google, Facebook, Microsoft, Apple, Yahoo, and Skype.
From 2013, GCHQ realised that public attitudes to SIGINT had changed and that its former unquestioned secrecy was no longer appropriate or acceptable. The growing use of the Internet, together with its inherent insecurities, meant that the communications traffic of private citizens was becoming inextricably mixed with that of its targets, and openness in the handling of this issue was becoming essential to its credibility as an organisation. The Internet had become a "cyber commons", with its dominance creating a "second age of SIGINT". GCHQ transformed itself accordingly, greatly expanding its Public Relations and Legal departments and adopting public education in cyber security as an important part of its remit.
2010s
In February 2014, The Guardian, based on documents provided by Snowden, revealed that GCHQ had indiscriminately collected 1.8 million private Yahoo webcam images from users across the world. In the same month NBC and The Intercept, based on documents released by Snowden, revealed the Joint Threat Research Intelligence Group and the Computer Network Exploitation units within GCHQ. Their mission was cyber operations based on "dirty tricks": shutting down enemy communications, discrediting adversaries, and planting misinformation. According to a conference slideshow presented by GCHQ, these operations accounted for 5% of all GCHQ operations.
Soon after becoming Director of GCHQ in 2014, Robert Hannigan wrote an article in the Financial Times on the topic of internet surveillance, stating that "however much [large US technology companies] may dislike it, they have become the command and control networks of choice for terrorists and criminals" and that GCHQ and its sister agencies "cannot tackle these challenges at scale without greater support from the private sector", arguing that most internet users "would be comfortable with a better and more sustainable relationship between the [intelligence] agencies and the tech companies". Since the 2013 global surveillance disclosures, large US technology companies have improved security and become less co-operative with foreign intelligence agencies, including those of the UK, generally requiring a US court order before disclosing data. However the head of the UK technology industry group techUK rejected these claims, stating that they understood the issues but that disclosure obligations "must be based upon a clear and transparent legal framework and effective oversight rather than, as suggested, a deal between the industry and government".
In 2015, documents obtained by The Intercept from US National Security Agency whistleblower Edward Snowden revealed that GCHQ had carried out a mass-surveillance operation, codenamed KARMA POLICE, since about 2008. The operation swept up the IP address of Internet users visiting websites, and was established with no public scrutiny or oversight. KARMA POLICE is a powerful spying tool in conjunction with other GCHQ programs because IP addresses could be cross-referenced with other data. The goal of the program, according to the documents, was "either (a) a web browsing profile for every visible user on the internet, or (b) a user profile for every visible website on the internet."
In 2015, GCHQ admitted for the first time in court that it conducts computer hacking.
In 2017, US Press Secretary Sean Spicer alleged that GCHQ had conducted surveillance on US President Donald Trump, basing the allegation on statements made by a media commentator during a Fox News segment. The US government formally apologised for the allegations and promised they would not be repeated. However, surveillance of Russian agents did pick up contacts made by Trump's campaign team in the run-up to his election, which were passed on to US agencies.
On 31 October 2018, GCHQ joined Instagram.
Security mission
As well as a mission to gather intelligence, GCHQ has long had a corresponding mission to assist in the protection of the British government's own communications. When the Government Code and Cypher School (GC&CS) was created in 1919, its overt task was providing security advice. GC&CS's Security section was located in Mansfield College, Oxford, during the Second World War.
In April 1946, GC&CS became GCHQ, and the Security section, now part of GCHQ, moved from Oxford to join the rest of the organisation at Eastcote later that year.
LCSA
From 1952 to 1954, the intelligence mission of GCHQ relocated to Cheltenham; the Security section remained at Eastcote, and in March 1954 became a separate, independent organisation: the London Communications Security Agency (LCSA), which was renamed the London Communications-Electronic Security Agency (LCESA) in 1958.
In April 1965, GPO and MOD units merged with LCESA to become the Communications-Electronic Security Department (CESD).
CESG
In October 1969, CESD was merged into GCHQ, becoming the Communications-Electronic Security Group (CESG).
In 1977 CESG relocated from Eastcote to Cheltenham.
CESG continued as the UK National Technical Authority for information assurance, including cryptography. CESG did not manufacture security equipment, but worked with industry to ensure the availability of suitable products and services, while GCHQ itself funded research into such areas, for example to the Centre for Quantum Computation at Oxford University and the Heilbronn Institute for Mathematical Research at the University of Bristol.
In the 21st century, CESG ran a number of assurance schemes such as CHECK, CLAS, Commercial Product Assurance (CPA) and CESG Assisted Products Service (CAPS).
Public key encryption
In late 1969 the concept for public-key encryption was developed and proven by James H. Ellis, who had worked for CESG (and before it, CESD) since 1965. Ellis lacked the number theory expertise necessary to build a workable system. Subsequently, a feasible implementation scheme via an asymmetric key algorithm was invented by another staff member Clifford Cocks, a mathematics graduate. This fact was kept secret until 1997.
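The scheme Cocks devised was mathematically equivalent to what was later published openly as RSA. The following toy sketch of that kind of asymmetric scheme uses deliberately tiny numbers and no padding; it is purely illustrative and not drawn from the original GCHQ work:

```python
# Toy sketch of an asymmetric ("public-key") encryption scheme of the RSA type,
# equivalent in principle to the scheme Cocks devised. Tiny numbers, no padding:
# for illustration only, never for real use.
p, q = 61, 53                     # two secret primes
n = p * q                         # public modulus
phi = (p - 1) * (q - 1)           # kept secret
e = 17                            # public exponent, coprime to phi
d = pow(e, -1, phi)               # private exponent (modular inverse, Python 3.8+)

def encrypt(m: int) -> int:       # anyone may encrypt using the public pair (n, e)
    return pow(m, e, n)

def decrypt(c: int) -> int:       # only the holder of d can decrypt
    return pow(c, d, n)

message = 42
assert decrypt(encrypt(message)) == message
```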
NCSC
In 2016, the National Cyber Security Centre was established under GCHQ but located in London, as the UK's authority on cybersecurity. It absorbed and replaced CESG as well as activities that had previously existed outside GCHQ: the Centre for Cyber Assessment (CCA), Computer Emergency Response Team UK (CERT UK) and the cyber-related responsibilities of the Centre for the Protection of National Infrastructure (CPNI).
Joint Technical Language Service
The Joint Technical Language Service (JTLS) was established in 1955, drawing on members of the small Ministry of Defence technical language team and others, initially to provide standard English translations for organisational expressions in any foreign language, discover the correct English equivalents of technical terms in foreign languages and discover the correct expansions of abbreviations in any language. The remit of the JTLS has expanded in the ensuing years to cover technical language support and interpreting and translation services across the UK Government and to local public sector services in Gloucestershire and surrounding counties. The JTLS also produces and publishes foreign language working aids under crown copyright and conducts research into machine translation and on-line dictionaries and glossaries. The JTLS is co-located with GCHQ for administrative purposes.
International relationships
GCHQ operates in partnership with equivalent agencies worldwide in a number of bilateral and multilateral relationships. The principal relationship is with the United States (National Security Agency), Canada (Communications Security Establishment), Australia (Australian Signals Directorate) and New Zealand (Government Communications Security Bureau), through the mechanism of the UK-US Security Agreement, a broad intelligence-sharing agreement encompassing a range of intelligence collection methods. Relationships are alleged to include shared collection methods, such as the system described in the popular media as ECHELON, as well as analysed product.
Legal basis
GCHQ's legal basis is enshrined in the Intelligence Services Act 1994 Section 3 as follows:
Activities that involve interception of communications are permitted under the Regulation of Investigatory Powers Act 2000; this kind of interception can only be carried out after a warrant has been issued by a Secretary of State. The Human Rights Act 1998 requires the intelligence agencies, including GCHQ, to respect citizens' rights as described in the European Convention on Human Rights.
Oversight
The Prime Minister nominates cross-party Members of Parliament to an Intelligence and Security Committee. The remit of the Committee includes oversight of intelligence and security activities and reports are made directly to Parliament. Its functions were increased under the Justice and Security Act 2013 to provide for further access and investigatory powers.
Judicial oversight of GCHQ's conduct is exercised by the Investigatory Powers Tribunal. The UK also has an independent Intelligence Services Commissioner and Interception of Communications Commissioner, both of whom are former senior judges.
The Investigatory Powers Tribunal ruled in December 2014 that GCHQ does not breach the European Convention on Human Rights, and that its activities are compliant with Articles 8 (right to privacy) and 10 (freedom of expression) of the Convention. However, the Tribunal stated in February 2015 that one particular aspect, the data-sharing arrangement that allowed UK intelligence services to request data from the US surveillance programmes Prism and Upstream, had been in contravention of human rights law until December 2014, when two paragraphs of additional information, providing details about the procedures and safeguards, were disclosed to the public.
Furthermore, the IPT ruled that the legislative framework in the United Kingdom does not permit mass surveillance and that while GCHQ collects and analyses data in bulk, it does not practice mass surveillance. This complements independent reports by the Interception of Communications Commissioner, and a special report made by the Intelligence and Security Committee of Parliament; although several shortcomings and potential improvements to both oversight and the legislative framework were highlighted.
Abuses
Despite the inherent secrecy around much of GCHQ's work, investigations carried out by the UK government after the Snowden disclosures have admitted various abuses by the security services. A report by the Intelligence and Security Committee (ISC) in 2015 revealed that a small number of staff at UK intelligence agencies had been found to misuse their surveillance powers, in one case leading to the dismissal of a member of staff at GCHQ, although there were no laws in place at the time to make these abuses a criminal offence.
Later that year, a ruling by the Investigatory Powers Tribunal found that GCHQ acted unlawfully in conducting surveillance on two human rights organisations. The closed hearing found the government in breach of its internal surveillance policies in accessing and retaining the communications of the Egyptian Initiative for Personal Rights and the Legal Resources Centre in South Africa. This was only the second time in the IPT's history that it had made a positive determination in favour of applicants after a closed session.
At another IPT case in 2015, GCHQ conceded that "from January 2010, the regime for the interception/obtaining, analysis, use, disclosure and destruction of legally privileged material has not been in accordance with the law for the purposes of Article 8(2) of the European convention on human rights and was accordingly unlawful". This admission was made in connection with a case brought against them by Abdelhakim Belhaj, a Libyan opponent of the former Gaddafi regime, and his wife Fatima Bouchard. The couple accused British ministers and officials of participating in their unlawful abduction, kidnapping and removal to Libya in March 2004, while Gaddafi was still in power.
On 25 May 2021, the European Court of Human Rights (ECHR) ruled that GCHQ's bulk interception of communications violated data privacy rules and that it did not provide sufficient protections for confidential journalistic material because it gathered communications in bulk.
Surveillance of parliamentarians
In 2015 there was a complaint by Green Party MP Caroline Lucas that British intelligence services, including GCHQ, had been spying on MPs allegedly "in defiance of laws prohibiting it."
Then-Home Secretary, Theresa May, had told Parliament in 2014 that:
The Investigatory Powers Tribunal investigated the complaint, and ruled that contrary to the allegation, there was no law that gave the communications of parliament any special protection. The Wilson Doctrine merely acts as a political convention.
Constitutional legal case
A controversial GCHQ case determined the scope of judicial review of prerogative powers (the Crown's residual powers under common law). This was Council of Civil Service Unions v Minister for the Civil Service [1985] AC 374 (often known simply as the "GCHQ case"). In this case, a prerogative Order in Council had been used by the prime minister (who is the Minister for the Civil Service) to ban trade union activities by civil servants working at GCHQ. This order was issued without consultation. The House of Lords had to decide whether this was reviewable by judicial review. It was held that executive action is not immune from judicial review simply because it uses powers derived from common law rather than statute (thus the prerogative is reviewable).
Leadership
The following is a list of the heads and operational heads of GCHQ and GC&CS:
Sir Hugh Sinclair KCB (1919–1939) (Founder)
Alastair Denniston CMG CBE (1921 – February 1942) (Operational Head)
Sir Edward Travis KCMG CBE (February 1942 – 1952)
Sir Eric Jones KCMG CB CBE (April 1952 – 1960)
Sir Clive Loehnis KCMG (1960–1964)
Sir Leonard Hooper KCMG CBE (1965–1973)
Sir Arthur Bonsall KCMG CBE (1973–1978)
Sir Brian John Maynard Tovey KCMG (1978–1983)
Sir Peter Marychurch KCMG (1983–1989)
Sir John Anthony Adye KCMG (1989–1996)
Sir David Omand GCB (1996–1997)
Sir Kevin Tebbit KCB CMG (1998)
Sir Francis Richards KCMG CVO DL (1998–2003)
Sir David Pepper KCMG (2003–2008)
Sir Iain Lobban KCMG CB (2008–2014)
Robert Hannigan CMG (2014–2017)
Sir Jeremy Fleming KCMG CB (2017–present)
Stations and former stations
The following are stations and former stations that have operated since the Cold War.
Current
United Kingdom
GCHQ Bude, Cornwall
GCHQ Cheltenham, Gloucestershire (Headquarters)
GCHQ London
GCHQ Manchester
GCHQ Scarborough, North Yorkshire
RAF Digby, Lincolnshire
RAF Menwith Hill, North Yorkshire
Overseas
GCHQ Ascension Island
GCHQ Cyprus
Former
United Kingdom
GCHQ Brora, Sutherland
GCHQ Cheadle, Staffordshire
GCHQ Culmhead, Somerset
GCHQ Hawklaw, Fife
Overseas
GCHQ Hong Kong
GCHQ Certified Training
The GCHQ Certified Training (GCT) scheme was established to certify two main levels of cybersecurity training. There are also degree and master's level courses. These are:
Awareness Level Training: giving an understanding and a foundation in cybersecurity concepts; and
Application Level Training: a more in-depth course
The GCT scheme was designed to help organisations find the right training that also met GCHQ's exacting standards. It was designed to assure high-quality cybersecurity training courses where the training provider had also undergone rigorous quality checks. The GCT process is carried out by APMG as the independent certification body. The scheme is part of the National Cyber Security Programme established by the Government to develop knowledge, skills and capability in all aspects of cybersecurity in the UK, and is based on the IISP Skills Framework.
In popular culture
The historical drama film The Imitation Game (2014) featured Benedict Cumberbatch portraying Alan Turing's efforts to break the Enigma code while employed by the Government Code and Cypher School.
GCHQ have set a number of cryptic online challenges to the public, used to attract interest and for recruitment, starting in late 1999. The response to the 2004 challenge was described as "excellent", and the challenge set in 2015 had over 600,000 attempts. It also published the GCHQ puzzle book in 2016 which sold more than 300,000 copies, with the proceeds going to charity. A second book was published in October 2018.
GCHQ appeared in the 2019 Doctor Who special "Resolution", in which the Reconnaissance Scout Dalek storms the facility and exterminates the staff in order to use the organisation's resources to summon a Dalek fleet.
GCHQ is the setting of the 2020 Sky One sitcom Intelligence, featuring David Schwimmer as an incompetent American NSA officer liaising with GCHQ's Cyber Crimes unit.
See also
GCHQ units:
Joint Operations Cell
National Cyber Security Centre
GCHQ specifics:
Capenhurst – said to be home to a GCHQ monitoring site in the 1990s
Hugh Alexander – head of the cryptanalysis division at GCHQ from 1949 to 1971
Operation Socialist, a 2010–13 operation in Belgium
Zircon, the 1980s cancelled GCHQ satellite project
UK agencies:
British intelligence agencies
Joint Forces Intelligence Group
RAF Intelligence
UK cyber security community
Elsewhere:
Signals intelligence by alliances, nations and industries
NSA – equivalent United States organisation
Notes and references
Bibliography
External links
Her Majesty's Government Communications Centre
GovCertUK
GCHQ: Britain's Most Secret Intelligence Agency
BBC: A final look at GCHQ's top secret Oakley site in Cheltenham
INCENSER, or how NSA and GCHQ are tapping internet cables
1919 establishments in the United Kingdom
British intelligence agencies
Computer security organizations
Cryptography organizations
Foreign relations of the United Kingdom
Government agencies established in 1919
Organisations based in Cheltenham
Signals intelligence agencies
Foreign Office during World War II
Organizations associated with Russian interference in the 2016 United States elections
Headquarters in the United Kingdom |
13564 | https://en.wikipedia.org/wiki/Homomorphism | Homomorphism | In algebra, a homomorphism is a structure-preserving map between two algebraic structures of the same type (such as two groups, two rings, or two vector spaces). The word homomorphism comes from the Ancient Greek language: () meaning "same" and () meaning "form" or "shape". However, the word was apparently introduced to mathematics due to a (mis)translation of German meaning "similar" to meaning "same". The term "homomorphism" appeared as early as 1892, when it was attributed to the German mathematician Felix Klein (1849–1925).
Homomorphisms of vector spaces are also called linear maps, and their study is the object of linear algebra.
The concept of homomorphism has been generalized, under the name of morphism, to many other structures that either do not have an underlying set, or are not algebraic. This generalization is the starting point of category theory.
A homomorphism may also be an isomorphism, an endomorphism, an automorphism, etc. (see below). Each of those can be defined in a way that may be generalized to any class of morphisms.
Definition
A homomorphism is a map between two algebraic structures of the same type (that is of the same name), that preserves the operations of the structures. This means a map between two sets , equipped with the same structure such that, if is an operation of the structure (supposed here, for simplification, to be a binary operation), then
for every pair , of elements of . One says often that preserves the operation or is compatible with the operation.
Formally, a map preserves an operation of arity k, defined on both and if
for all elements in .
The operations that must be preserved by a homomorphism include 0-ary operations, that is the constants. In particular, when an identity element is required by the type of structure, the identity element of the first structure must be mapped to the corresponding identity element of the second structure.
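Written out explicitly in standard notation (the symbols f : A → B for the map, * for a binary operation and μ for a k-ary one are chosen here for illustration), the preservation conditions read:

```latex
% Preservation of a binary operation * by a map f : A -> B
f(x * y) = f(x) * f(y) \qquad \text{for all } x, y \in A,
% and, for an operation \mu of arity k defined on both structures,
f\bigl(\mu_A(a_1, \ldots, a_k)\bigr) = \mu_B\bigl(f(a_1), \ldots, f(a_k)\bigr)
\qquad \text{for all } a_1, \ldots, a_k \in A .
```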
For example:
A semigroup homomorphism is a map between semigroups that preserves the semigroup operation.
A monoid homomorphism is a map between monoids that preserves the monoid operation and maps the identity element of the first monoid to that of the second monoid (the identity element is a 0-ary operation).
A group homomorphism is a map between groups that preserves the group operation. This implies that the group homomorphism maps the identity element of the first group to the identity element of the second group, and maps the inverse of an element of the first group to the inverse of the image of this element. Thus a semigroup homomorphism between groups is necessarily a group homomorphism.
A ring homomorphism is a map between rings that preserves the ring addition, the ring multiplication, and the multiplicative identity. Whether the multiplicative identity is to be preserved depends upon the definition of ring in use. If the multiplicative identity is not preserved, one has a rng homomorphism.
A linear map is a homomorphism of vector spaces; that is, a group homomorphism between vector spaces that preserves the abelian group structure and scalar multiplication.
A module homomorphism, also called a linear map between modules, is defined similarly.
An algebra homomorphism is a map that preserves the algebra operations.
An algebraic structure may have more than one operation, and a homomorphism is required to preserve each operation. Thus a map that preserves only some of the operations is not a homomorphism of the structure, but only a homomorphism of the substructure obtained by considering only the preserved operations. For example, a map between monoids that preserves the monoid operation and not the identity element, is not a monoid homomorphism, but only a semigroup homomorphism.
The notation for the operations does not need to be the same in the source and the target of a homomorphism. For example, the real numbers form a group for addition, and the positive real numbers form a group for multiplication. The exponential function
satisfies
and is thus a homomorphism between these two groups. It is even an isomorphism (see below), as its inverse function, the natural logarithm, satisfies
and is also a group homomorphism.
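Written out explicitly in standard notation, these two identities are:

```latex
\exp(x + y) = \exp(x)\,\exp(y) \quad \text{for all } x, y \in (\mathbb{R}, +),
\qquad
\ln(uv) = \ln(u) + \ln(v) \quad \text{for all } u, v \in (\mathbb{R}_{>0}, \times).
```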
Examples
The real numbers are a ring, having both addition and multiplication. The set of all 2×2 matrices is also a ring, under matrix addition and matrix multiplication. If we define a function between these rings as follows:
where is a real number, then is a homomorphism of rings, since preserves both addition:
and multiplication:
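Written out explicitly, a standard choice of such a map (assumed here for illustration) sends each real number to the corresponding 2×2 scalar matrix, and preserves both operations:

```latex
f(r) = \begin{pmatrix} r & 0 \\ 0 & r \end{pmatrix},
\qquad
f(r+s) = \begin{pmatrix} r+s & 0 \\ 0 & r+s \end{pmatrix} = f(r) + f(s),
\qquad
f(rs) = \begin{pmatrix} rs & 0 \\ 0 & rs \end{pmatrix} = f(r)\,f(s).
```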
For another example, the nonzero complex numbers form a group under the operation of multiplication, as do the nonzero real numbers. (Zero must be excluded from both groups since it does not have a multiplicative inverse, which is required for elements of a group.) Define a function from the nonzero complex numbers to the nonzero real numbers by
That is, is the absolute value (or modulus) of the complex number . Then is a homomorphism of groups, since it preserves multiplication:
Note that cannot be extended to a homomorphism of rings (from the complex numbers to the real numbers), since it does not preserve addition:
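A quick numerical check of both claims, with arbitrary sample values:

```python
# The modulus preserves multiplication (group homomorphism on the nonzero
# complex numbers) but not addition (so it is not a ring homomorphism).
import math

z, w = 3 + 4j, 1 - 2j
assert math.isclose(abs(z * w), abs(z) * abs(w))        # |zw| = |z||w|
assert not math.isclose(abs(z + w), abs(z) + abs(w))    # |z+w| != |z|+|w| here
```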
As another example, the diagram shows a monoid homomorphism from the monoid to the monoid . Due to the different names of corresponding operations, the structure preservation properties satisfied by amount to and .
A composition algebra over a field has a quadratic form, called a norm, , which is a group homomorphism from the multiplicative group of to the multiplicative group of .
Special homomorphisms
Several kinds of homomorphisms have a specific name, which is also defined for general morphisms.
Isomorphism
An isomorphism between algebraic structures of the same type is commonly defined as a bijective homomorphism.
In the more general context of category theory, an isomorphism is defined as a morphism that has an inverse that is also a morphism. In the specific case of algebraic structures, the two definitions are equivalent, although they may differ for non-algebraic structures, which have an underlying set.
More precisely, if
is a (homo)morphism, it has an inverse if there exists a homomorphism
such that
If and have underlying sets, and has an inverse , then is bijective. In fact, is injective, as implies , and is surjective, as, for any in , one has , and is the image of an element of .
Conversely, if is a bijective homomorphism between algebraic structures, let be the map such that is the unique element of such that . One has and it remains only to show that is a homomorphism. If is a binary operation of the structure, for every pair , of elements of , one has
and is thus compatible with As the proof is similar for any arity, this shows that is a homomorphism.
This proof does not work for non-algebraic structures. For examples, for topological spaces, a morphism is a continuous map, and the inverse of a bijective continuous map is not necessarily continuous. An isomorphism of topological spaces, called homeomorphism or bicontinuous map, is thus a bijective continuous map, whose inverse is also continuous.
Endomorphism
An endomorphism is a homomorphism whose domain equals the codomain, or, more generally, a morphism whose source is equal to the target.
The endomorphisms of an algebraic structure, or of an object of a category form a monoid under composition.
The endomorphisms of a vector space or of a module form a ring. In the case of a vector space or a free module of finite dimension, the choice of a basis induces a ring isomorphism between the ring of endomorphisms and the ring of square matrices of the same dimension.
Automorphism
An automorphism is an endomorphism that is also an isomorphism.
The automorphisms of an algebraic structure or of an object of a category form a group under composition, which is called the automorphism group of the structure.
Many groups that have received a name are automorphism groups of some algebraic structure. For example, the general linear group is the automorphism group of a vector space of dimension over a field .
The automorphism groups of fields were introduced by Évariste Galois for studying the roots of polynomials, and are the basis of Galois theory.
Monomorphism
For algebraic structures, monomorphisms are commonly defined as injective homomorphisms.
In the more general context of category theory, a monomorphism is defined as a morphism that is left cancelable. This means that a (homo)morphism is a monomorphism if, for any pair , of morphisms from any other object to , then implies .
These two definitions of monomorphism are equivalent for all common algebraic structures. More precisely, they are equivalent for fields, for which every homomorphism is a monomorphism, and for varieties of universal algebra, that is algebraic structures for which operations and axioms (identities) are defined without any restriction (fields are not a variety, as the multiplicative inverse is defined either as a unary operation or as a property of the multiplication, which are, in both cases, defined only for nonzero elements).
In particular, the two definitions of a monomorphism are equivalent for sets, magmas, semigroups, monoids, groups, rings, fields, vector spaces and modules.
A split monomorphism is a homomorphism that has a left inverse and thus it is itself a right inverse of that other homomorphism. That is, a homomorphism is a split monomorphism if there exists a homomorphism such that A split monomorphism is always a monomorphism, for both meanings of monomorphism. For sets and vector spaces, every monomorphism is a split monomorphism, but this property does not hold for most common algebraic structures.
An injective homomorphism is left cancelable: If one has for every in , the common source of and . If is injective, then , and thus . This proof works not only for algebraic structures, but also for any category whose objects are sets and arrows are maps between these sets. For example, an injective continuous map is a monomorphism in the category of topological spaces.
For proving that, conversely, a left cancelable homomorphism is injective, it is useful to consider a free object on . Given a variety of algebraic structures, a free object on is a pair consisting of an algebraic structure of this variety and an element of satisfying the following universal property: for every structure of the variety, and every element of , there is a unique homomorphism such that . For example, for sets, the free object on is simply ; for semigroups, the free object on is which, as a semigroup, is isomorphic to the additive semigroup of the positive integers; for monoids, the free object on is which, as a monoid, is isomorphic to the additive monoid of the nonnegative integers; for groups, the free object on is the infinite cyclic group which, as a group, is isomorphic to the additive group of the integers; for rings, the free object on is the polynomial ring; for vector spaces or modules, the free object on is the vector space or free module that has as a basis.
If a free object over exists, then every left cancelable homomorphism is injective: let be a left cancelable homomorphism, and and be two elements of such . By definition of the free object , there exist homomorphisms and from to such that and . As , one has by the uniqueness in the definition of a universal property. As is left cancelable, one has , and thus . Therefore, is injective.
Existence of a free object on for a variety: For building a free object over , consider the set of the well-formed formulas built up from and the operations of the structure. Two such formulas are said to be equivalent if one may pass from one to the other by applying the axioms (identities of the structure). This defines an equivalence relation, provided the identities are not subject to conditions, that is, if one works with a variety. Then the operations of the variety are well defined on the set of equivalence classes of for this relation. It is straightforward to show that the resulting object is a free object on .
Epimorphism
In algebra, epimorphisms are often defined as surjective homomorphisms. On the other hand, in category theory, epimorphisms are defined as right cancelable morphisms. This means that a (homo)morphism is an epimorphism if, for any pair , of morphisms from to any other object , the equality implies .
A surjective homomorphism is always right cancelable, but the converse is not always true for algebraic structures. However, the two definitions of epimorphism are equivalent for sets, vector spaces, abelian groups, modules (see below for a proof), and groups. The importance of these structures in all mathematics, and specially in linear algebra and homological algebra, may explain the coexistence of two non-equivalent definitions.
Algebraic structures for which there exist non-surjective epimorphisms include semigroups and rings. The most basic example is the inclusion of integers into rational numbers, which is a homomorphism of rings and of multiplicative semigroups. For both structures it is a monomorphism and a non-surjective epimorphism, but not an isomorphism.
A wide generalization of this example is the localization of a ring by a multiplicative set. Every localization is a ring epimorphism, which is not, in general, surjective. As localizations are fundamental in commutative algebra and algebraic geometry, this may explain why in these areas, the definition of epimorphisms as right cancelable homomorphisms is generally preferred.
A split epimorphism is a homomorphism that has a right inverse and thus it is itself a left inverse of that other homomorphism. That is, a homomorphism is a split epimorphism if there exists a homomorphism such that A split epimorphism is always an epimorphism, for both meanings of epimorphism. For sets and vector spaces, every epimorphism is a split epimorphism, but this property does not hold for most common algebraic structures.
In summary, one has
the last implication is an equivalence for sets, vector spaces, modules and abelian groups; the first implication is an equivalence for sets and vector spaces.
Let be a homomorphism. We want to prove that if it is not surjective, it is not right cancelable.
In the case of sets, let be an element of that does not belong to , and define such that is the identity function, and that for every except that is any other element of . Clearly is not right cancelable, as and
In the case of vector spaces, abelian groups and modules, the proof relies on the existence of cokernels and on the fact that the zero maps are homomorphisms: let be the cokernel of , and be the canonical map, such that . Let be the zero map. If is not surjective, , and thus (one is a zero map, while the other is not). Thus is not cancelable, as (both are the zero map from to ).
Kernel
Any homomorphism defines an equivalence relation on by if and only if . The relation is called the kernel of . It is a congruence relation on . The quotient set can then be given a structure of the same type as , in a natural way, by defining the operations of the quotient set by , for each operation of . In that case the image of in under the homomorphism is necessarily isomorphic to ; this fact is one of the isomorphism theorems.
When the algebraic structure is a group for some operation, the equivalence class of the identity element of this operation suffices to characterize the equivalence relation. In this case, the quotient by the equivalence relation is denoted by (usually read as " mod "). Also in this case, it is , rather than , that is called the kernel of . The kernels of homomorphisms of a given type of algebraic structure are naturally equipped with some structure. This structure type of the kernels is the same as the considered structure, in the case of abelian groups, vector spaces and modules, but is different and has received a specific name in other cases, such as normal subgroup for kernels of group homomorphisms and ideals for kernels of ring homomorphisms (in the case of non-commutative rings, the kernels are the two-sided ideals).
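Two standard illustrations of kernels, given here as examples rather than taken from the text above, are reduction modulo n and the determinant:

```latex
% Reduction modulo n is a ring homomorphism whose kernel is the ideal n\mathbb{Z}:
\varphi : \mathbb{Z} \to \mathbb{Z}/n\mathbb{Z}, \quad \varphi(a) = a \bmod n,
\qquad \ker\varphi = n\mathbb{Z}, \qquad \mathbb{Z}/\ker\varphi \cong \operatorname{im}\varphi .
% The determinant is a group homomorphism whose kernel is a normal subgroup:
\det : \mathrm{GL}_n(F) \to F^{\times},
\qquad \ker(\det) = \mathrm{SL}_n(F).
```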
Relational structures
In model theory, the notion of an algebraic structure is generalized to structures involving both operations and relations. Let L be a signature consisting of function and relation symbols, and A, B be two L-structures. Then a homomorphism from A to B is a mapping h from the domain of A to the domain of B such that
h(FA(a1,…,an)) = FB(h(a1),…,h(an)) for each n-ary function symbol F in L,
RA(a1,…,an) implies RB(h(a1),…,h(an)) for each n-ary relation symbol R in L.
In the special case with just one binary relation, we obtain the notion of a graph homomorphism. For a detailed discussion of relational homomorphisms and isomorphisms see.
Formal language theory
Homomorphisms are also used in the study of formal languages and are often briefly referred to as morphisms. Given alphabets Σ1 and Σ2, a function such that for all u and v in Σ1∗ is called a homomorphism on Σ1∗. If h is a homomorphism on Σ1∗ and ε denotes the empty string, then h is called an ε-free homomorphism when for all in Σ1∗.
The set Σ∗ of words formed from the alphabet Σ may be thought of as the free monoid generated by Σ. Here the monoid operation is concatenation and the identity element is the empty word. From this perspective, a language homomorphism is precisely a monoid homomorphism.
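A small sketch of such a monoid homomorphism on strings, with an arbitrary choice of letter images:

```python
# A homomorphism on the free monoid Sigma* is fixed by the images of the letters
# and extended by concatenation, so h(uv) = h(u)h(v) and h("") = "".
letter_image = {"a": "01", "b": "1"}     # arbitrary example images

def h(word: str) -> str:
    return "".join(letter_image[c] for c in word)

u, v = "ab", "ba"
assert h(u + v) == h(u) + h(v)           # monoid homomorphism property
assert h("") == ""                       # the empty word maps to the empty word
```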
See also
Diffeomorphism
Homomorphic encryption
Homomorphic secret sharing – a simplistic decentralized voting protocol
Morphism
Quasimorphism
Notes
Citations
References
Morphisms |
13586 | https://en.wikipedia.org/wiki/HTTPS | HTTPS | Hypertext Transfer Protocol Secure (HTTPS) is an extension of the Hypertext Transfer Protocol (HTTP). It is used for secure communication over a computer network, and is widely used on the Internet. In HTTPS, the communication protocol is encrypted using Transport Layer Security (TLS) or, formerly, Secure Sockets Layer (SSL). The protocol is therefore also referred to as HTTP over TLS, or HTTP over SSL.
The principal motivations for HTTPS are authentication of the accessed website, and protection of the privacy and integrity of the exchanged data while in transit. It protects against man-in-the-middle attacks, and the bidirectional encryption of communications between a client and server protects the communications against eavesdropping and tampering. The authentication aspect of HTTPS requires a trusted third party to sign server-side digital certificates. This was historically an expensive operation, which meant fully authenticated HTTPS connections were usually found only on secured payment transaction services and other secured corporate information systems on the World Wide Web. In 2016, a campaign by the Electronic Frontier Foundation with the support of web browser developers led to the protocol becoming more prevalent. HTTPS is now used more often by web users than the original non-secure HTTP, primarily to protect page authenticity on all types of websites; secure accounts; and to keep user communications, identity, and web browsing private.
Overview
The Uniform Resource Identifier (URI) scheme HTTPS has identical usage syntax to the HTTP scheme. However, HTTPS signals the browser to use an added encryption layer of SSL/TLS to protect the traffic. SSL/TLS is especially suited for HTTP, since it can provide some protection even if only one side of the communication is authenticated. This is the case with HTTP transactions over the Internet, where typically only the server is authenticated (by the client examining the server's certificate).
HTTPS creates a secure channel over an insecure network. This ensures reasonable protection from eavesdroppers and man-in-the-middle attacks, provided that adequate cipher suites are used and that the server certificate is verified and trusted.
Because HTTPS piggybacks HTTP entirely on top of TLS, the entirety of the underlying HTTP protocol can be encrypted. This includes the request URL (which particular web page was requested), query parameters, headers, and cookies (which often contain identifying information about the user). However, because website addresses and port numbers are necessarily part of the underlying TCP/IP protocols, HTTPS cannot protect their disclosure. In practice this means that even on a correctly configured web server, eavesdroppers can infer the IP address and port number of the web server, and sometimes even the domain name (e.g. www.example.org, but not the rest of the URL) that a user is communicating with, along with the amount of data transferred and the duration of the communication, though not the content of the communication.
Web browsers know how to trust HTTPS websites based on certificate authorities that come pre-installed in their software. Certificate authorities are in this way being trusted by web browser creators to provide valid certificates. Therefore, a user should trust an HTTPS connection to a website if and only if all of the following are true:
The user trusts that the browser software correctly implements HTTPS with correctly pre-installed certificate authorities.
The user trusts the certificate authority to vouch only for legitimate websites.
The website provides a valid certificate, which means it was signed by a trusted authority.
The certificate correctly identifies the website (e.g., when the browser visits "https://example.com", the received certificate is properly for "example.com" and not some other entity).
The user trusts that the protocol's encryption layer (SSL/TLS) is sufficiently secure against eavesdroppers.
HTTPS is especially important over insecure networks and networks that may be subject to tampering. Insecure networks, such as public Wi-Fi access points, allow anyone on the same local network to packet-sniff and discover sensitive information not protected by HTTPS. Additionally, some free-to-use and paid WLAN networks have been observed tampering with webpages by engaging in packet injection in order to serve their own ads on other websites. This practice can be exploited maliciously in many ways, such as by injecting malware onto webpages and stealing users' private information.
HTTPS is also important for connections over the Tor network, as malicious Tor nodes could otherwise damage or alter the contents passing through them in an insecure fashion and inject malware into the connection. This is one reason why the Electronic Frontier Foundation and the Tor Project started the development of HTTPS Everywhere, which is included in Tor Browser.
As more information is revealed about global mass surveillance and criminals stealing personal information, the use of HTTPS security on all websites is becoming increasingly important regardless of the type of Internet connection being used. Even though metadata about individual pages that a user visits might not be considered sensitive, when aggregated it can reveal a lot about the user and compromise the user's privacy.
Deploying HTTPS also allows the use of HTTP/2 (or its predecessor, the now-deprecated protocol SPDY), which is a new generation of HTTP designed to reduce page load times, size, and latency.
It is recommended to use HTTP Strict Transport Security (HSTS) with HTTPS to protect users from man-in-the-middle attacks, especially SSL stripping.
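As a rough illustration of what an HSTS policy looks like in practice, the following standard-library sketch fetches a page over HTTPS and prints the Strict-Transport-Security header if the server sends one; the host name is a placeholder:

```python
# Check whether a server sends an HSTS policy. Illustrative sketch only.
import http.client

host = "example.com"                      # placeholder host
conn = http.client.HTTPSConnection(host, 443, timeout=10)
conn.request("GET", "/")
response = conn.getresponse()
hsts = response.getheader("Strict-Transport-Security")
print(hsts or "no HSTS header sent")      # e.g. "max-age=31536000; includeSubDomains"
conn.close()
```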
HTTPS should not be confused with the seldom-used Secure HTTP (S-HTTP) specified in RFC 2660.
Usage in websites
, 33.2% of Alexa top 1,000,000 websites use HTTPS as default, 57.1% of the Internet's 137,971 most popular websites have a secure implementation of HTTPS, and 70% of page loads (measured by Firefox Telemetry) use HTTPS.
Browser integration
Most browsers display a warning if they receive an invalid certificate. Older browsers, when connecting to a site with an invalid certificate, would present the user with a dialog box asking whether they wanted to continue. Newer browsers display a warning across the entire window. Newer browsers also prominently display the site's security information in the address bar. Extended validation certificates show the legal entity on the certificate information. Most browsers also display a warning to the user when visiting a site that contains a mixture of encrypted and unencrypted content. Additionally, many web filters return a security warning when visiting prohibited websites.
The Electronic Frontier Foundation, opining that "In an ideal world, every web request could be defaulted to HTTPS", has provided an add-on called HTTPS Everywhere for Mozilla Firefox, Google Chrome, Chromium, and Android, that enables HTTPS by default for hundreds of frequently used websites.
Forcing a web browser to load HTTPS content only has been supported in Firefox starting in version 83.
Security
The security of HTTPS is that of the underlying TLS, which typically uses long-term public and private keys to generate a short-term session key, which is then used to encrypt the data flow between the client and the server. X.509 certificates are used to authenticate the server (and sometimes the client as well). As a consequence, certificate authorities and public key certificates are necessary to verify the relation between the certificate and its owner, as well as to generate, sign, and administer the validity of certificates. While this can be more beneficial than verifying the identities via a web of trust, the 2013 mass surveillance disclosures drew attention to certificate authorities as a potential weak point allowing man-in-the-middle attacks. An important property in this context is forward secrecy, which ensures that encrypted communications recorded in the past cannot be retrieved and decrypted should long-term secret keys or passwords be compromised in the future. Not all web servers provide forward secrecy.
For HTTPS to be effective, a site must be completely hosted over HTTPS. If some of the site's contents are loaded over HTTP (scripts or images, for example), or if only a certain page that contains sensitive information, such as a log-in page, is loaded over HTTPS while the rest of the site is loaded over plain HTTP, the user will be vulnerable to attacks and surveillance. Additionally, cookies on a site served through HTTPS must have the secure attribute enabled. On a site that has sensitive information on it, the user and the session will get exposed every time that site is accessed with HTTP instead of HTTPS.
Technical
Difference from HTTP
HTTPS URLs begin with "https://" and use port 443 by default, whereas, HTTP URLs begin with "http://" and use port 80 by default.
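The difference is visible at the connection level: the following standard-library sketch opens a TLS connection on port 443 and prints the negotiated protocol version, cipher suite and certificate subject (the host name is a placeholder):

```python
# Connect to port 443, perform the TLS handshake, and inspect the result.
import socket
import ssl

hostname = "example.org"                           # placeholder host
context = ssl.create_default_context()             # system-trusted CAs, hostname checks on
with socket.create_connection((hostname, 443), timeout=10) as sock:
    with context.wrap_socket(sock, server_hostname=hostname) as tls:
        print(tls.version())                       # e.g. 'TLSv1.3'
        print(tls.cipher())                        # (cipher name, protocol, secret bits)
        print(tls.getpeercert()["subject"])        # identity asserted by the certificate
```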
HTTP is not encrypted and thus is vulnerable to man-in-the-middle and eavesdropping attacks, which can let attackers gain access to website accounts and sensitive information, and modify webpages to inject malware or advertisements. HTTPS is designed to withstand such attacks and is considered secure against them (with the exception of HTTPS implementations that use deprecated versions of SSL).
Network layers
HTTP operates at the highest layer of the TCP/IP model—the application layer; as does the TLS security protocol (operating as a lower sublayer of the same layer), which encrypts an HTTP message prior to transmission and decrypts a message upon arrival. Strictly speaking, HTTPS is not a separate protocol, but refers to the use of ordinary HTTP over an encrypted SSL/TLS connection.
HTTPS encrypts all message contents, including the HTTP headers and the request/response data. With the exception of the possible CCA cryptographic attack described in the limitations section below, an attacker should at most be able to discover that a connection is taking place between two parties, along with their domain names and IP addresses.
Server setup
To prepare a web server to accept HTTPS connections, the administrator must create a public key certificate for the web server. This certificate must be signed by a trusted certificate authority for the web browser to accept it without warning. The authority certifies that the certificate holder is the operator of the web server that presents it. Web browsers are generally distributed with a list of signing certificates of major certificate authorities so that they can verify certificates signed by them.
Acquiring certificates
A number of commercial certificate authorities exist, offering paid-for SSL/TLS certificates of a number of types, including Extended Validation Certificates.
Let's Encrypt, launched in April 2016, provides free and automated service that delivers basic SSL/TLS certificates to websites. According to the Electronic Frontier Foundation, Let's Encrypt will make switching from HTTP to HTTPS "as easy as issuing one command, or clicking one button." The majority of web hosts and cloud providers now leverage Let's Encrypt, providing free certificates to their customers.
Use as access control
The system can also be used for client authentication in order to limit access to a web server to authorized users. To do this, the site administrator typically creates a certificate for each user, which the user loads into their browser. Normally, the certificate contains the name and e-mail address of the authorized user and is automatically checked by the server on each connection to verify the user's identity, potentially without even requiring a password.
In case of compromised secret (private) key
An important property in this context is perfect forward secrecy (PFS). Possessing one of the long-term asymmetric secret keys used to establish an HTTPS session should not make it easier to derive the short-term session key to then decrypt the conversation, even at a later time. Diffie–Hellman key exchange (DHE) and Elliptic curve Diffie–Hellman key exchange (ECDHE) are in 2013 the only schemes known to have that property. In 2013, only 30% of Firefox, Opera, and Chromium Browser sessions used it, and nearly 0% of Apple's Safari and Microsoft Internet Explorer sessions. TLS 1.3, published in August 2018, dropped support for ciphers without forward secrecy. , 96.6% of web servers surveyed support some form of forward secrecy, and 52.1% will use forward secrecy with most browsers.
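One common way to favour forward secrecy in practice is to require modern protocol versions, since all TLS 1.3 cipher suites use ephemeral key exchange. A minimal server-side sketch using Python's standard library, with placeholder certificate paths:

```python
# Server-side TLS context that refuses anything older than TLS 1.2.
# All TLS 1.3 suites, and the ECDHE suites preferred by modern TLS 1.2 stacks,
# provide forward secrecy. Certificate/key paths are placeholders.
import ssl

context = ssl.create_default_context(ssl.Purpose.CLIENT_AUTH)
context.minimum_version = ssl.TLSVersion.TLSv1_2
context.load_cert_chain(certfile="cert.pem", keyfile="key.pem")
```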
Certificate revocation
A certificate may be revoked before it expires, for example because the secrecy of the private key has been compromised. Newer versions of popular browsers such as Firefox, Opera, and Internet Explorer on Windows Vista implement the Online Certificate Status Protocol (OCSP) to verify that this is not the case. The browser sends the certificate's serial number to the certificate authority or its delegate via OCSP (Online Certificate Status Protocol) and the authority responds, telling the browser whether the certificate is still valid or not. The CA may also issue a CRL to tell people that these certificates are revoked. CRLs are no longer required by the CA/Browser forum, nevertheless, they are still widely used by the CAs. Most revocation statuses on the Internet disappear soon after the expiration of the certificates.
Limitations
SSL (Secure Sockets Layer) and TLS (Transport Layer Security) encryption can be configured in two modes: simple and mutual. In simple mode, authentication is only performed by the server. The mutual version requires the user to install a personal client certificate in the web browser for user authentication. In either case, the level of protection depends on the correctness of the implementation of the software and the cryptographic algorithms in use.
SSL/TLS does not prevent the indexing of the site by a web crawler, and in some cases the URI of the encrypted resource can be inferred by knowing only the intercepted request/response size. This allows an attacker to have access to the plaintext (the publicly available static content), and the encrypted text (the encrypted version of the static content), permitting a cryptographic attack.
Because TLS operates at a protocol level below that of HTTP and has no knowledge of the higher-level protocols, TLS servers can only strictly present one certificate for a particular address and port combination. In the past, this meant that it was not feasible to use name-based virtual hosting with HTTPS. A solution called Server Name Indication (SNI) exists, which sends the hostname to the server before encrypting the connection, although many old browsers do not support this extension. Support for SNI has been available since Firefox 2, Opera 8, Apple Safari 2.1, Google Chrome 6, and Internet Explorer 7 on Windows Vista.
From an architectural point of view:
An SSL/TLS connection is managed by the first front machine that initiates the TLS connection. If, for any reasons (routing, traffic optimization, etc.), this front machine is not the application server and it has to decipher data, solutions have to be found to propagate user authentication information or certificate to the application server, which needs to know who is going to be connected.
For SSL/TLS with mutual authentication, the SSL/TLS session is managed by the first server that initiates the connection. In situations where encryption has to be propagated along chained servers, session timeout management becomes extremely tricky to implement.
Security is maximal with mutual SSL/TLS, but on the client-side there is no way to properly end the SSL/TLS connection and disconnect the user except by waiting for the server session to expire or by closing all related client applications.
A sophisticated type of man-in-the-middle attack called SSL stripping was presented at the 2009 Blackhat Conference. This type of attack defeats the security provided by HTTPS by changing an HTTPS link into an HTTP link, taking advantage of the fact that few Internet users actually type "https" into their browser interface: they get to a secure site by clicking on a link, and thus are fooled into thinking that they are using HTTPS when in fact they are using HTTP. The attacker then communicates in the clear with the client. This prompted the development of a countermeasure in HTTP called HTTP Strict Transport Security (HSTS).
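A minimal sketch of the server side of that countermeasure, using only the Python standard library: once a browser has received the Strict-Transport-Security header over HTTPS, it refuses to downgrade later requests for that host to plain HTTP. The one-year max-age value is a common but arbitrary choice.

    # Sketch of the HSTS countermeasure: the header below tells a browser to
    # upgrade future http:// links for this host, defeating the downgrade
    # that SSL stripping relies on.
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class HSTSHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            self.send_response(200)
            self.send_header("Strict-Transport-Security",
                             "max-age=31536000; includeSubDomains")
            self.send_header("Content-Type", "text/plain")
            self.end_headers()
            self.wfile.write(b"served with an HSTS policy\n")

    if __name__ == "__main__":
        # In practice this handler would sit behind a TLS termination layer.
        HTTPServer(("localhost", 8443), HSTSHandler).serve_forever()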
HTTPS has been shown to be vulnerable to a range of traffic analysis attacks. Traffic analysis attacks are a type of side-channel attack that relies on variations in the timing and size of traffic in order to infer properties about the encrypted traffic itself. Traffic analysis is possible because SSL/TLS encryption changes the contents of traffic, but has minimal impact on the size and timing of traffic. In May 2010, researchers from Microsoft Research and Indiana University showed that detailed sensitive user data can be inferred from side channels such as packet sizes. They found that, despite HTTPS protection in several high-profile, top-of-the-line web applications in healthcare, taxation, investment, and web search, an eavesdropper could infer a user's illnesses, medications, and surgeries, their family income, and their investment secrets. Although this work demonstrated the vulnerability of HTTPS to traffic analysis, the approach presented by the authors required manual analysis and focused specifically on web applications protected by HTTPS.
The fact that most modern websites, including Google, Yahoo!, and Amazon, use HTTPS causes problems for many users trying to access public Wi-Fi hot spots, because a Wi-Fi hot spot login page fails to load if the user tries to open an HTTPS resource. Several websites, such as neverssl.com and nonhttps.com, guarantee that they will always remain accessible by HTTP.
History
Netscape Communications created HTTPS in 1994 for its Netscape Navigator web browser. Originally, HTTPS was used with the SSL protocol. As SSL evolved into Transport Layer Security (TLS), HTTPS was formally specified by RFC 2818 in May 2000. Google announced in February 2018 that its Chrome browser would mark HTTP sites as "Not Secure" after July 2018. This move was to encourage website owners to implement HTTPS, as an effort to make the World Wide Web more secure.
See also
Bullrun (decryption program) – a secret anti-encryption program run by the US National Security Agency
Computer security
HSTS
Opportunistic encryption
Stunnel
References
External links
RFC 2818: HTTP Over TLS
RFC 5246: The Transport Layer Security (TLS) Protocol Version 1.2
RFC 6101: The Secure Sockets Layer (SSL) Protocol Version 3.0
How HTTPS works ...in a comic!
Is TLS fast yet?
Hypertext Transfer Protocol
Cryptographic protocols
Secure communication
URI schemes
Transport Layer Security
Computer-related introductions in 1994
Network booting |
13636 | https://en.wikipedia.org/wiki/History%20of%20computing%20hardware | History of computing hardware | The history of computing hardware covers the developments from early simple devices to aid calculation to modern day computers. Before the 20th century, most calculations were done by humans. Early mechanical tools to help humans with digital calculations, like the abacus, were referred to as calculating machines or calculators (and other proprietary names). The machine operator was called the computer.
The first aids to computation were purely mechanical devices which required the operator to set up the initial values of an elementary arithmetic operation, then manipulate the device to obtain the result. Later, computers represented numbers in a continuous form (e.g. distance along a scale, rotation of a shaft, or a voltage). Numbers could also be represented in the form of digits, automatically manipulated by a mechanism. Although this approach generally required more complex mechanisms, it greatly increased the precision of results. The development of transistor technology and then the integrated circuit chip led to a series of breakthroughs, starting with transistor computers and then integrated circuit computers, causing digital computers to largely replace analog computers. Metal-oxide-semiconductor (MOS) large-scale integration (LSI) then enabled semiconductor memory and the microprocessor, leading to another key breakthrough, the miniaturized personal computer (PC), in the 1970s. The cost of computers gradually became so low that personal computers by the 1990s, and then mobile computers (smartphones and tablets) in the 2000s, became ubiquitous.
Early devices
Ancient and medieval
Devices have been used to aid computation for thousands of years, mostly using one-to-one correspondence with fingers. The earliest counting device was probably a form of tally stick. The Lebombo bone from the mountains between Eswatini and South Africa may be the oldest known mathematical artifact. It dates from 35,000 BCE and consists of 29 distinct notches that were deliberately cut into a baboon's fibula. Later record keeping aids throughout the Fertile Crescent included calculi (clay spheres, cones, etc.) which represented counts of items, probably livestock or grains, sealed in hollow unbaked clay containers. The use of counting rods is one example. The abacus was used early on for arithmetic tasks. What we now call the Roman abacus was used in Babylonia as early as c. 2700–2300 BC. Since then, many other forms of reckoning boards or tables have been invented. In a medieval European counting house, a checkered cloth would be placed on a table, and markers moved around on it according to certain rules, as an aid to calculating sums of money.
Several analog computers were constructed in ancient and medieval times to perform astronomical calculations. These included the astrolabe and Antikythera mechanism from the Hellenistic world (c. 150–100 BC). In Roman Egypt, Hero of Alexandria (c. 10–70 AD) made mechanical devices including automata and a programmable cart. Other early mechanical devices used to perform one or another type of calculations include the planisphere and other mechanical computing devices invented by Abu Rayhan al-Biruni (c. AD 1000); the equatorium and universal latitude-independent astrolabe by Abū Ishāq Ibrāhīm al-Zarqālī (c. AD 1015); the astronomical analog computers of other medieval Muslim astronomers and engineers; and the astronomical clock tower of Su Song (1094) during the Song dynasty. The castle clock, a hydropowered mechanical astronomical clock invented by Ismail al-Jazari in 1206, was the first programmable analog computer. Ramon Llull invented the Lullian Circle: a notional machine for calculating answers to philosophical questions (in this case, to do with Christianity) via logical combinatorics. This idea was taken up by Leibniz centuries later, and is thus one of the founding elements in computing and information science.
Renaissance calculating tools
Scottish mathematician and physicist John Napier discovered that the multiplication and division of numbers could be performed by the addition and subtraction, respectively, of the logarithms of those numbers. While producing the first logarithmic tables, Napier needed to perform many tedious multiplications. It was at this point that he designed his 'Napier's bones', an abacus-like device that greatly simplified calculations that involved multiplication and division.
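The principle can be stated in a few lines of Python: adding logarithms and then exponentiating recovers the product, which is exactly the shortcut Napier's tables and, later, the slide rule mechanised.

    # Napier's idea in miniature: multiplication becomes addition of logarithms,
    # which is also what the slide rule does with its logarithmic scales.
    import math

    a, b = 37.5, 12.4
    product = math.exp(math.log(a) + math.log(b))   # add the logs, then invert
    print(product, a * b)                           # both ~465.0 (up to rounding)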
Since real numbers can be represented as distances or intervals on a line, the slide rule was invented in the 1620s, shortly after Napier's work, to allow multiplication and division operations to be carried out significantly faster than was previously possible. Edmund Gunter built a calculating device with a single logarithmic scale at the University of Oxford. His device greatly simplified arithmetic calculations, including multiplication and division. William Oughtred greatly improved this in 1630 with his circular slide rule. He followed this up with the modern slide rule in 1632, essentially a combination of two Gunter rules, held together with the hands. Slide rules were used by generations of engineers and other mathematically involved professional workers, until the invention of the pocket calculator.
Mechanical calculators
Wilhelm Schickard, a German polymath, designed a calculating machine in 1623 which combined a mechanised form of Napier's rods with the world's first mechanical adding machine built into the base. Because it made use of a single-tooth gear there were circumstances in which its carry mechanism would jam. A fire destroyed at least one of the machines in 1624 and it is believed Schickard was too disheartened to build another.
In 1642, while still a teenager, Blaise Pascal started some pioneering work on calculating machines and after three years of effort and 50 prototypes he invented a mechanical calculator. He built twenty of these machines (called Pascal's calculator or Pascaline) in the following ten years. Nine Pascalines have survived, most of which are on display in European museums. A continuing debate exists over whether Schickard or Pascal should be regarded as the "inventor of the mechanical calculator" and the range of issues to be considered is discussed elsewhere.
Gottfried Wilhelm von Leibniz invented the stepped reckoner and his famous stepped drum mechanism around 1672. He attempted to create a machine that could be used not only for addition and subtraction but would utilise a moveable carriage to enable long multiplication and division. Leibniz once said "It is unworthy of excellent men to lose hours like slaves in the labour of calculation which could safely be relegated to anyone else if machines were used." However, Leibniz did not incorporate a fully successful carry mechanism. Leibniz also described the binary numeral system, a central ingredient of all modern computers. However, up to the 1940s, many subsequent designs (including Charles Babbage's machines of 1822 and even ENIAC of 1945) were based on the decimal system.
Around 1820, Charles Xavier Thomas de Colmar created what would over the rest of the century become the first successful, mass-produced mechanical calculator, the Thomas Arithmometer. It could be used to add and subtract, and with a moveable carriage the operator could also multiply, and divide by a process of long multiplication and long division. It utilised a stepped drum similar in conception to that invented by Leibniz. Mechanical calculators remained in use until the 1970s.
Punched-card data processing
In 1804, French weaver Joseph Marie Jacquard developed a loom in which the pattern being woven was controlled by a paper tape constructed from punched cards. The paper tape could be changed without changing the mechanical design of the loom. This was a landmark achievement in programmability. His machine was an improvement over similar weaving looms. Punched cards were preceded by punch bands, as in the machine proposed by Basile Bouchon. These bands would inspire information recording for automatic pianos and more recently numerical control machine tools.
In the late 1880s, the American Herman Hollerith invented data storage on punched cards that could then be read by a machine. To process these punched cards, he invented the tabulator and the keypunch machine. His machines used electromechanical relays and counters. Hollerith's method was used in the 1890 United States Census. That census was processed two years faster than the prior census had been. Hollerith's company eventually became the core of IBM.
By 1920, electromechanical tabulating machines could add, subtract, and print accumulated totals. Machine functions were directed by inserting dozens of wire jumpers into removable control panels. When the United States instituted Social Security in 1935, IBM punched-card systems were used to process records of 26 million workers. Punched cards became ubiquitous in industry and government for accounting and administration.
Leslie Comrie's articles on punched-card methods and W. J. Eckert's publication of Punched Card Methods in Scientific Computation in 1940, described punched-card techniques sufficiently advanced to solve some differential equations or perform multiplication and division using floating point representations, all on punched cards and unit record machines. Such machines were used during World War II for cryptographic statistical processing, as well as a vast number of administrative uses. The Astronomical Computing Bureau, Columbia University, performed astronomical calculations representing the state of the art in computing.
Calculators
By the 20th century, earlier mechanical calculators, cash registers, accounting machines, and so on were redesigned to use electric motors, with gear position as the representation for the state of a variable. The word "computer" was a job title assigned primarily to women who used these calculators to perform mathematical calculations. By the 1920s, British scientist Lewis Fry Richardson's interest in weather prediction led him to propose human computers and numerical analysis to model the weather; to this day, the most powerful computers on Earth are needed to adequately model its weather using the Navier–Stokes equations.
Companies like Friden, Marchant Calculator and Monroe made desktop mechanical calculators from the 1930s that could add, subtract, multiply and divide. In 1948, the Curta was introduced by Austrian inventor Curt Herzstark. It was a small, hand-cranked mechanical calculator and as such, a descendant of Gottfried Leibniz's Stepped Reckoner and Thomas' Arithmometer.
The world's first all-electronic desktop calculator was the British Bell Punch ANITA, released in 1961. It used vacuum tubes, cold-cathode tubes and Dekatrons in its circuits, with 12 cold-cathode "Nixie" tubes for its display. The ANITA sold well since it was the only electronic desktop calculator available, and was silent and quick. The tube technology was superseded in June 1963 by the U.S. manufactured Friden EC-130, which had an all-transistor design, a stack of four 13-digit numbers displayed on a CRT, and introduced reverse Polish notation (RPN).
First general-purpose computing device
Charles Babbage, an English mechanical engineer and polymath, originated the concept of a programmable computer. Considered the "father of the computer", he conceptualized and invented the first mechanical computer in the early 19th century. After working on his revolutionary difference engine, designed to aid in navigational calculations, in 1833 he realized that a much more general design, an Analytical Engine, was possible. The input of programs and data was to be provided to the machine via punched cards, a method being used at the time to direct mechanical looms such as the Jacquard loom. For output, the machine would have a printer, a curve plotter and a bell. The machine would also be able to punch numbers onto cards to be read in later. It employed ordinary base-10 fixed-point arithmetic.
The Engine incorporated an arithmetic logic unit, control flow in the form of conditional branching and loops, and integrated memory, making it the first design for a general-purpose computer that could be described in modern terms as Turing-complete.
There was to be a store, or memory, capable of holding 1,000 numbers of 40 decimal digits each (ca. 16.7 kB). An arithmetical unit, called the "mill", would be able to perform all four arithmetic operations, plus comparisons and optionally square roots. Initially it was conceived as a difference engine curved back upon itself, in a generally circular layout, with the long store exiting off to one side. (Later drawings depict a regularized grid layout.) Like the central processing unit (CPU) in a modern computer, the mill would rely on its own internal procedures, roughly equivalent to microcode in modern CPUs, to be stored in the form of pegs inserted into rotating drums called "barrels", to carry out some of the more complex instructions the user's program might specify.
The programming language to be employed by users was akin to modern day assembly languages. Loops and conditional branching were possible, and so the language as conceived would have been Turing-complete as later defined by Alan Turing. Three different types of punch cards were used: one for arithmetical operations, one for numerical constants, and one for load and store operations, transferring numbers from the store to the arithmetical unit or back. There were three separate readers for the three types of cards.
The machine was about a century ahead of its time. However, the project was slowed by various problems including disputes with the chief machinist building parts for it. All the parts for his machine had to be made by hand—this was a major problem for a machine with thousands of parts. Eventually, the project was dissolved with the decision of the British Government to cease funding. Babbage's failure to complete the analytical engine can be chiefly attributed to difficulties not only of politics and financing, but also to his desire to develop an increasingly sophisticated computer and to move ahead faster than anyone else could follow. Ada Lovelace translated and added notes to the "Sketch of the Analytical Engine" by Luigi Federico Menabrea. This appears to be the first published description of programming, so Ada Lovelace is widely regarded as the first computer programmer.
Following Babbage, although at first unaware of his earlier work, was Percy Ludgate, a clerk to a corn merchant in Dublin, Ireland. He independently designed a programmable mechanical computer, which he described in a work that was published in 1909. Two other inventors, Leonardo Torres y Quevedo and Vannevar Bush, also did follow-on research based on Babbage's work. In his Essays on Automatics (1913) Torres y Quevedo designed a Babbage-type calculating machine that used electromechanical parts and included floating-point number representations, and he built an early prototype in 1920. Bush's paper Instrumental Analysis (1936) discussed using existing IBM punch card machines to implement Babbage's design. In the same year he started the Rapid Arithmetical Machine project to investigate the problems of constructing an electronic digital computer.
Analog computers
In the first half of the 20th century, analog computers were considered by many to be the future of computing. These devices used the continuously changeable aspects of physical phenomena such as electrical, mechanical, or hydraulic quantities to model the problem being solved, in contrast to digital computers, which represent varying quantities symbolically as their numerical values change. As an analog computer does not use discrete values, but rather continuous values, processes cannot be reliably repeated with exact equivalence, as they can with Turing machines.
The first modern analog computer was a tide-predicting machine, invented by Sir William Thomson, later Lord Kelvin, in 1872. It used a system of pulleys and wires to automatically calculate predicted tide levels for a set period at a particular location and was of great utility to navigation in shallow waters. His device was the foundation for further developments in analog computing.
The differential analyser, a mechanical analog computer designed to solve differential equations by integration using wheel-and-disc mechanisms, was conceptualized in 1876 by James Thomson, the brother of the more famous Lord Kelvin. He explored the possible construction of such calculators, but was stymied by the limited output torque of the ball-and-disk integrators. In a differential analyzer, the output of one integrator drove the input of the next integrator, or a graphing output.
An important advance in analog computing was the development of the first fire-control systems for long range ship gunlaying. When gunnery ranges increased dramatically in the late 19th century it was no longer a simple matter of calculating the proper aim point, given the flight times of the shells. Various spotters on board the ship would relay distance measures and observations to a central plotting station. There the fire direction teams fed in the location, speed and direction of the ship and its target, as well as various adjustments for Coriolis effect, weather effects on the air, and other adjustments; the computer would then output a firing solution, which would be fed to the turrets for laying. In 1912, British engineer Arthur Pollen developed the first electrically powered mechanical analogue computer (called at the time the Argo Clock). It was used by the Imperial Russian Navy in World War I. The alternative Dreyer Table fire control system was fitted to British capital ships by mid-1916.
Mechanical devices were also used to aid the accuracy of aerial bombing. Drift Sight was the first such aid, developed by Harry Wimperis in 1916 for the Royal Naval Air Service; it measured the wind speed from the air, and used that measurement to calculate the wind's effects on the trajectory of the bombs. The system was later improved with the Course Setting Bomb Sight, and reached a climax with World War II bomb sights, Mark XIV bomb sight (RAF Bomber Command) and the Norden (United States Army Air Forces).
The art of mechanical analog computing reached its zenith with the differential analyzer, built by H. L. Hazen and Vannevar Bush at MIT starting in 1927, which built on the mechanical integrators of James Thomson and the torque amplifiers invented by H. W. Nieman. A dozen of these devices were built before their obsolescence became obvious; the most powerful was constructed at the University of Pennsylvania's Moore School of Electrical Engineering, where the ENIAC was built.
A fully electronic analog computer was built by Helmut Hölzer in 1942 at Peenemünde Army Research Center.
By the 1950s the success of digital electronic computers had spelled the end for most analog computing machines, but hybrid analog computers, controlled by digital electronics, remained in substantial use into the 1950s and 1960s, and later in some specialized applications.
Advent of the digital computer
The principle of the modern computer was first described by computer scientist Alan Turing, who set out the idea in his seminal 1936 paper, On Computable Numbers. Turing reformulated Kurt Gödel's 1931 results on the limits of proof and computation, replacing Gödel's universal arithmetic-based formal language with the formal and simple hypothetical devices that became known as Turing machines. He proved that some such machine would be capable of performing any conceivable mathematical computation if it were representable as an algorithm. He went on to prove that there was no solution to the Entscheidungsproblem by first showing that the halting problem for Turing machines is undecidable: in general, it is not possible to decide algorithmically whether a given Turing machine will ever halt.
He also introduced the notion of a "universal machine" (now known as a universal Turing machine), with the idea that such a machine could perform the tasks of any other machine, or in other words, it is provably capable of computing anything that is computable by executing a program stored on tape, allowing the machine to be programmable. Von Neumann acknowledged that the central concept of the modern computer was due to this paper. Turing machines are to this day a central object of study in theory of computation. Except for the limitations imposed by their finite memory stores, modern computers are said to be Turing-complete, which is to say, they have algorithm execution capability equivalent to a universal Turing machine.
Electromechanical computers
The era of modern computing began with a flurry of development before and during World War II. Most digital computers built in this period were electromechanical – electric switches drove mechanical relays to perform the calculation. These devices had a low operating speed and were eventually superseded by much faster all-electric computers, originally using vacuum tubes.
The Z2 was one of the earliest examples of an electromechanical relay computer, and was created by German engineer Konrad Zuse in 1940. It was an improvement on his earlier Z1; although it used the same mechanical memory, it replaced the arithmetic and control logic with electrical relay circuits.
In the same year, electro-mechanical devices called bombes were built by British cryptologists to help decipher German Enigma-machine-encrypted secret messages during World War II. The bombe's initial design was created in 1939 at the UK Government Code and Cypher School (GC&CS) at Bletchley Park by Alan Turing, with an important refinement devised in 1940 by Gordon Welchman. The engineering design and construction was the work of Harold Keen of the British Tabulating Machine Company. It was a substantial development from a device that had been designed in 1938 by Polish Cipher Bureau cryptologist Marian Rejewski, and known as the "cryptologic bomb" (Polish: "bomba kryptologiczna").
In 1941, Zuse followed his earlier machine up with the Z3, the world's first working electromechanical programmable, fully automatic digital computer. The Z3 was built with 2000 relays, implementing a 22-bit word length that operated at a clock frequency of about 5–10 Hz. Program code and data were stored on punched film. It was quite similar to modern machines in some respects, pioneering numerous advances such as floating point numbers. Replacement of the hard-to-implement decimal system (used in Charles Babbage's earlier design) by the simpler binary system meant that Zuse's machines were easier to build and potentially more reliable, given the technologies available at that time. The Z3 was proven to have been a Turing-complete machine in 1998 by Raúl Rojas. In two 1936 patent applications, Zuse also anticipated that machine instructions could be stored in the same storage used for data—the key insight of what became known as the von Neumann architecture, first implemented in 1948 in America in the electromechanical IBM SSEC and in Britain in the fully electronic Manchester Baby.
Zuse suffered setbacks during World War II when some of his machines were destroyed in the course of Allied bombing campaigns. Apparently his work remained largely unknown to engineers in the UK and US until much later, although at least IBM was aware of it as it financed his post-war startup company in 1946 in return for an option on Zuse's patents.
In 1944, the Harvard Mark I was constructed at IBM's Endicott laboratories. It was a general-purpose electromechanical computer similar to the Z3, but it was not quite Turing-complete.
Digital computation
The term digital was first suggested by George Robert Stibitz and refers to cases where a signal, such as a voltage, is not used to directly represent a value (as it would be in an analog computer) but to encode it. In November 1937, Stibitz, then working at Bell Labs (1930–1941), completed a relay-based calculator he later dubbed the "Model K" (for "kitchen table", on which he had assembled it), which became the first binary adder. Typically signals have two states – low (usually representing 0) and high (usually representing 1), but sometimes three-valued logic is used, especially in high-density memory. Modern computers generally use binary logic, but many early machines were decimal computers. In these machines, the basic unit of data was the decimal digit, encoded in one of several schemes, including binary-coded decimal (BCD), bi-quinary, excess-3, and two-out-of-five code.
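For example, two of these digit encodings can be sketched in a few lines of Python; the other schemes differ only in the bit pattern assigned to each decimal digit.

    # Sketch of two of the decimal-digit encodings mentioned above: plain
    # binary-coded decimal (BCD) and excess-3 (the digit plus three).
    def to_bcd(digit):
        return format(digit, "04b")           # 4 bits per decimal digit

    def to_excess3(digit):
        return format(digit + 3, "04b")

    for d in (0, 5, 9):
        print(d, to_bcd(d), to_excess3(d))
    # 0 0000 0011
    # 5 0101 1000
    # 9 1001 1100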
The mathematical basis of digital computing is Boolean algebra, developed by the British mathematician George Boole in his work The Laws of Thought, published in 1854. His Boolean algebra was further refined in the 1860s by William Jevons and Charles Sanders Peirce, and was first presented systematically by Ernst Schröder and A. N. Whitehead. In 1879 Gottlob Frege developed the formal approach to logic and proposed the first logic language for logical equations.
In the 1930s, working independently, American electronic engineer Claude Shannon and Soviet logician Victor Shestakov both showed a one-to-one correspondence between the concepts of Boolean logic and certain electrical circuits, now called logic gates, which are ubiquitous in digital computers. They showed that electronic relays and switches can realize the expressions of Boolean algebra. This thesis essentially founded practical digital circuit design. In addition, Shannon's paper gives a correct circuit diagram for a 4-bit digital binary adder.
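In the same spirit (though not a reproduction of Shannon's diagram), a 4-bit ripple-carry adder can be written directly in terms of Boolean gate functions:

    # A 4-bit ripple-carry adder expressed with Boolean gate functions.
    def XOR(a, b): return a ^ b
    def AND(a, b): return a & b
    def OR(a, b):  return a | b

    def full_adder(a, b, carry_in):
        s = XOR(XOR(a, b), carry_in)
        carry_out = OR(AND(a, b), AND(carry_in, XOR(a, b)))
        return s, carry_out

    def add4(a_bits, b_bits):            # bits given least-significant first
        carry, out = 0, []
        for a, b in zip(a_bits, b_bits):
            s, carry = full_adder(a, b, carry)
            out.append(s)
        return out, carry

    print(add4([1, 0, 1, 1], [1, 1, 0, 0]))   # 13 + 3 -> ([0, 0, 0, 0], 1), i.e. 16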
Electronic data processing
Purely electronic circuit elements soon replaced their mechanical and electromechanical equivalents, at the same time that digital calculation replaced analog. Machines such as the Z3, the Atanasoff–Berry Computer, the Colossus computers, and the ENIAC were built by hand, using circuits containing relays or valves (vacuum tubes), and often used punched cards or punched paper tape for input and as the main (non-volatile) storage medium.
The engineer Tommy Flowers joined the telecommunications branch of the General Post Office in 1926. While working at the research station in Dollis Hill in the 1930s, he began to explore the possible use of electronics for the telephone exchange. Experimental equipment that he built in 1934 went into operation 5 years later, converting a portion of the telephone exchange network into an electronic data processing system, using thousands of vacuum tubes.
In the US, in 1940 Arthur Dickinson (IBM) invented the first digital electronic computer. This calculating device was fully electronic – control, calculations and output (the first electronic display). John Vincent Atanasoff and Clifford E. Berry of Iowa State University developed the Atanasoff–Berry Computer (ABC) in 1942, the first binary electronic digital calculating device. This design was semi-electronic (electro-mechanical control and electronic calculations), and used about 300 vacuum tubes, with capacitors fixed in a mechanically rotating drum for memory. However, its paper card writer/reader was unreliable and the regenerative drum contact system was mechanical. The machine's special-purpose nature and lack of changeable, stored program distinguish it from modern computers.
Computers whose logic was primarily built using vacuum tubes are now known as first generation computers.
The electronic programmable computer
During World War II, British codebreakers at Bletchley Park, north of London, achieved a number of successes at breaking encrypted enemy military communications. The German encryption machine, Enigma, was first attacked with the help of the electro-mechanical bombes. Women often operated these bombe machines. They ruled out possible Enigma settings by performing chains of logical deductions implemented electrically. Most possibilities led to a contradiction, and the few remaining could be tested by hand.
The Germans also developed a series of teleprinter encryption systems, quite different from Enigma. The Lorenz SZ 40/42 machine was used for high-level Army communications, code-named "Tunny" by the British. The first intercepts of Lorenz messages began in 1941. As part of an attack on Tunny, Max Newman and his colleagues developed the Heath Robinson, a fixed-function machine to aid in code breaking. Tommy Flowers, a senior engineer at the Post Office Research Station was recommended to Max Newman by Alan Turing and spent eleven months from early February 1943 designing and building the more flexible Colossus computer (which superseded the Heath Robinson). After a functional test in December 1943, Colossus was shipped to Bletchley Park, where it was delivered on 18 January 1944 and attacked its first message on 5 February.
Colossus was the world's first electronic digital programmable computer. It used a large number of valves (vacuum tubes). It had paper-tape input and was capable of being configured to perform a variety of Boolean logical operations on its data, but it was not Turing-complete. Data input to Colossus was by photoelectric reading of a paper tape transcription of the enciphered intercepted message. This was arranged in a continuous loop so that it could be read and re-read multiple times – there being no internal store for the data. The reading mechanism ran at 5,000 characters per second, with the paper tape moving at high speed. Colossus Mark 1 contained 1500 thermionic valves (tubes), but Mark 2, with 2400 valves and five processors operating in parallel, was both five times faster and simpler to operate than Mark 1, greatly speeding the decoding process. Mark 2 was designed while Mark 1 was being constructed. Allen Coombs took over leadership of the Colossus Mark 2 project when Tommy Flowers moved on to other projects. The first Mark 2 Colossus became operational on 1 June 1944, just in time for the Allied Invasion of Normandy on D-Day.
Most of the use of Colossus was in determining the start positions of the Tunny rotors for a message, which was called "wheel setting". Colossus included the first-ever use of shift registers and systolic arrays, enabling five simultaneous tests, each involving up to 100 Boolean calculations. This enabled five different possible start positions to be examined for one transit of the paper tape. As well as wheel setting some later Colossi included mechanisms intended to help determine pin patterns known as "wheel breaking". Both models were programmable using switches and plug panels in a way their predecessors had not been. Ten Mk 2 Colossi were operational by the end of the war.
Without the use of these machines, the Allies would have been deprived of the very valuable intelligence that was obtained from reading the vast quantity of enciphered high-level telegraphic messages between the German High Command (OKW) and their army commands throughout occupied Europe. Details of their existence, design, and use were kept secret well into the 1970s. Winston Churchill personally issued an order for their destruction into pieces no larger than a man's hand, to keep secret that the British were capable of cracking Lorenz SZ cyphers (from German rotor stream cipher machines) during the oncoming Cold War. Two of the machines were transferred to the newly formed GCHQ and the others were destroyed. As a result, the machines were not included in many histories of computing. A reconstructed working copy of one of the Colossus machines is now on display at Bletchley Park.
The US-built ENIAC (Electronic Numerical Integrator and Computer) was the first electronic programmable computer built in the US. Although the ENIAC was similar to the Colossus it was much faster and more flexible. It was unambiguously a Turing-complete device and could compute any problem that would fit into its memory. Like the Colossus, a "program" on the ENIAC was defined by the states of its patch cables and switches, a far cry from the stored program electronic machines that came later. Once a program was written, it had to be mechanically set into the machine with manual resetting of plugs and switches. The programmers of the ENIAC were women who had been trained as mathematicians.
It combined the high speed of electronics with the ability to be programmed for many complex problems. It could add or subtract 5000 times a second, a thousand times faster than any other machine. It also had modules to multiply, divide, and square root. High-speed memory was limited to 20 words (equivalent to about 80 bytes). Built under the direction of John Mauchly and J. Presper Eckert at the University of Pennsylvania, ENIAC's development and construction lasted from 1943 to full operation at the end of 1945. The machine was huge, weighing 30 tons, using 200 kilowatts of electric power and contained over 18,000 vacuum tubes, 1,500 relays, and hundreds of thousands of resistors, capacitors, and inductors. One of its major engineering feats was to minimize the effects of tube burnout, which was a common problem in machine reliability at that time. The machine was in almost constant use for the next ten years.
Stored-program computer
Early computing machines were programmable in the sense that they could follow the sequence of steps they had been set up to execute, but the "program", or steps that the machine was to execute, were set up usually by changing how the wires were plugged into a patch panel or plugboard. "Reprogramming", when it was possible at all, was a laborious process, starting with engineers working out flowcharts, designing the new set up, and then the often-exacting process of physically re-wiring patch panels. Stored-program computers, by contrast, were designed to store a set of instructions (a program), in memory – typically the same memory as stored data.
Theory
The theoretical basis for the stored-program computer had been proposed by Alan Turing in his 1936 paper. In 1945 Turing joined the National Physical Laboratory and began his work on developing an electronic stored-program digital computer. His 1945 report 'Proposed Electronic Calculator' was the first specification for such a device.
Meanwhile, John von Neumann at the Moore School of Electrical Engineering, University of Pennsylvania, circulated his First Draft of a Report on the EDVAC in 1945. Although substantially similar to Turing's design and containing comparatively little engineering detail, the computer architecture it outlined became known as the "von Neumann architecture". Turing presented a more detailed paper to the National Physical Laboratory (NPL) Executive Committee in 1946, giving the first reasonably complete design of a stored-program computer, a device he called the Automatic Computing Engine (ACE). However, the better-known EDVAC design of John von Neumann, who knew of Turing's theoretical work, received more publicity, despite its incomplete nature and questionable lack of attribution of the sources of some of the ideas.
Turing thought that the speed and the size of computer memory were crucial elements, so he proposed a high-speed memory of what would today be called 25 KB, accessed at a speed of 1 MHz. The ACE implemented subroutine calls, whereas the EDVAC did not, and the ACE also used Abbreviated Computer Instructions, an early form of programming language.
Manchester Baby
The Manchester Baby was the world's first electronic stored-program computer. It was built at the Victoria University of Manchester by Frederic C. Williams, Tom Kilburn and Geoff Tootill, and ran its first program on 21 June 1948.
The machine was not intended to be a practical computer but was instead designed as a testbed for the Williams tube, the first random-access digital storage device. Invented by Freddie Williams and Tom Kilburn at the University of Manchester in 1946 and 1947, it was a cathode-ray tube that used an effect called secondary emission to temporarily store electronic binary data, and was used successfully in several early computers.
Although the computer was small and primitive, it was a proof of concept for solving a single problem; Baby was the first working machine to contain all of the elements essential to a modern electronic computer. As soon as the Baby had demonstrated the feasibility of its design, a project was initiated at the university to develop the design into a more usable computer, the Manchester Mark 1. The Mark 1 in turn quickly became the prototype for the Ferranti Mark 1, the world's first commercially available general-purpose computer.
The Baby had a 32-bit word length and a memory of 32 words. As it was designed to be the simplest possible stored-program computer, the only arithmetic operations implemented in hardware were subtraction and negation; other arithmetic operations were implemented in software. The first of three programs written for the machine found the highest proper divisor of 2^18 (262,144), a calculation that was known would take a long time to run—and so prove the computer's reliability—by testing every integer from 2^18 − 1 downwards, as division was implemented by repeated subtraction of the divisor. The program consisted of 17 instructions and ran for 52 minutes before reaching the correct answer of 131,072, after the Baby had performed 3.5 million operations (for an effective CPU speed of 1.1 kIPS). The successive approximations to the answer were displayed as the successive positions of a bright dot on the Williams tube.
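Restated in modern terms, and with no claim to reflect the Baby's actual instruction sequence, the computation looks like this, with division done by repeated subtraction as on the original machine:

    # The Baby's first problem re-stated: find the highest proper divisor of
    # 2**18 by trying candidates downward, with "division" done by repeated
    # subtraction. A sketch of the method, not the machine's program.
    def divides_by_subtraction(n, d):
        while n >= d:
            n -= d
        return n == 0

    def highest_proper_divisor(n):
        for candidate in range(n - 1, 0, -1):
            if divides_by_subtraction(n, candidate):
                return candidate

    print(highest_proper_divisor(2 ** 18))   # 131072, as the Baby found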
Manchester Mark 1
The Experimental machine led on to the development of the Manchester Mark 1 at the University of Manchester. Work began in August 1948, and the first version was operational by April 1949; a program written to search for Mersenne primes ran error-free for nine hours on the night of 16/17 June 1949.
The machine's successful operation was widely reported in the British press, which used the phrase "electronic brain" in describing it to their readers.
The computer is especially historically significant because of its pioneering inclusion of index registers, an innovation which made it easier for a program to read sequentially through an array of words in memory. Thirty-four patents resulted from the machine's development, and many of the ideas behind its design were incorporated in subsequent commercial products such as the IBM 701 and 702 as well as the Ferranti Mark 1. The chief designers, Frederic C. Williams and Tom Kilburn, concluded from their experiences with the Mark 1 that computers would be used more in scientific roles than in pure mathematics. In 1951 they started development work on Meg, the Mark 1's successor, which would include a floating-point unit.
EDSAC
The other contender for being the first recognizably modern digital stored-program computer was the EDSAC, designed and constructed by Maurice Wilkes and his team at the University of Cambridge Mathematical Laboratory in England in 1949. The machine was inspired by John von Neumann's seminal First Draft of a Report on the EDVAC and was one of the first usefully operational electronic digital stored-program computers.
EDSAC ran its first programs on 6 May 1949, when it calculated a table of squares and a list of prime numbers. The EDSAC also served as the basis for the first commercially applied computer, the LEO I, used by the food manufacturing company J. Lyons & Co. Ltd. EDSAC 1 was finally shut down on 11 July 1958, having been superseded by EDSAC 2, which stayed in use until 1965.
EDVAC
ENIAC inventors John Mauchly and J. Presper Eckert proposed the EDVAC's construction in August 1944, and design work for the EDVAC commenced at the University of Pennsylvania's Moore School of Electrical Engineering, before the ENIAC was fully operational. The design implemented a number of important architectural and logical improvements conceived during the ENIAC's construction, and a high-speed serial-access memory. However, Eckert and Mauchly left the project and its construction floundered.
It was finally delivered to the U.S. Army's Ballistics Research Laboratory at the Aberdeen Proving Ground in August 1949, but due to a number of problems, the computer only began operation in 1951, and then only on a limited basis.
Commercial computers
The first commercial computer was the Ferranti Mark 1, built by Ferranti and delivered to the University of Manchester in February 1951. It was based on the Manchester Mark 1. The main improvements over the Manchester Mark 1 were in the size of the primary storage (using random access Williams tubes), secondary storage (using a magnetic drum), a faster multiplier, and additional instructions. The basic cycle time was 1.2 milliseconds, and a multiplication could be completed in about 2.16 milliseconds. The multiplier used almost a quarter of the machine's 4,050 vacuum tubes (valves). A second machine was purchased by the University of Toronto, before the design was revised into the Mark 1 Star. At least seven of these later machines were delivered between 1953 and 1957, one of them to Shell labs in Amsterdam.
In October 1947, the directors of J. Lyons & Company, a British catering company famous for its teashops but with strong interests in new office management techniques, decided to take an active role in promoting the commercial development of computers. The LEO I computer (Lyons Electronic Office) became operational in April 1951 and ran the world's first regular routine office computer job. On 17 November 1951, the J. Lyons company began weekly operation of a bakery valuations job on the LEO – the first business application to go live on a stored program computer.
In June 1951, the UNIVAC I (Universal Automatic Computer) was delivered to the U.S. Census Bureau. Remington Rand eventually sold 46 machines at more than US$1 million each. UNIVAC was the first "mass produced" computer. It used 5,200 vacuum tubes and consumed 125 kW of power. Its primary storage was serial-access mercury delay lines capable of storing 1,000 words of 11 decimal digits plus sign (72-bit words).
IBM introduced a smaller, more affordable computer in 1954 that proved very popular. The IBM 650 weighed over 900 kg, the attached power supply weighed around 1350 kg, and both were held in separate cabinets of roughly 1.5 meters by 0.9 meters by 1.8 meters. The system cost US$500,000 or could be leased for US$3,500 a month. Its drum memory was originally 2,000 ten-digit words, later expanded to 4,000 words. Memory limitations such as this were to dominate programming for decades afterward. The program instructions were fetched from the spinning drum as the code ran. Efficient execution using drum memory was provided by a combination of hardware architecture – the instruction format included the address of the next instruction – and software: the Symbolic Optimal Assembly Program, SOAP, assigned instructions to the optimal addresses (to the extent possible by static analysis of the source program). Thus many instructions were, when needed, located in the next row of the drum to be read, and additional wait time for drum rotation was reduced.
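The placement idea can be illustrated with a toy model (the track size and execution times below are invented, not the 650's real parameters): each instruction is placed at the first free address that will be passing under the read head when the previous instruction finishes.

    # Toy model of the placement idea behind SOAP: put the next instruction at
    # the drum address that will be under the read head when the current
    # instruction finishes executing. Times are measured in word periods.
    TRACK_SIZE = 50                     # addresses per drum track (made up)

    def best_next_address(current_addr, execution_time_words, occupied):
        """Pick the first free address at or after the head position reached
        while the current instruction executes."""
        target = (current_addr + 1 + execution_time_words) % TRACK_SIZE
        for offset in range(TRACK_SIZE):
            addr = (target + offset) % TRACK_SIZE
            if addr not in occupied:
                return addr
        raise RuntimeError("drum track full")

    occupied = {0}
    addr = 0
    for exec_time in (3, 5, 2, 7):      # invented execution times
        addr = best_next_address(addr, exec_time, occupied)
        occupied.add(addr)
        print(addr)                     # 4, 10, 13, 21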
Microprogramming
In 1951, British scientist Maurice Wilkes developed the concept of microprogramming from the realisation that the central processing unit of a computer could be controlled by a miniature, highly specialised computer program in high-speed ROM. Microprogramming allows the base instruction set to be defined or extended by built-in programs (now called firmware or microcode). This concept greatly simplified CPU development. He first described this at the University of Manchester Computer Inaugural Conference in 1951, then published in expanded form in IEEE Spectrum in 1955.
It was widely used in the CPUs and floating-point units of mainframe and other computers; it was implemented for the first time in EDSAC 2, which also used multiple identical "bit slices" to simplify design. Interchangeable, replaceable tube assemblies were used for each bit of the processor.
Magnetic memory
Magnetic drum memories were developed for the US Navy during World War II, with the work continuing at Engineering Research Associates (ERA) in 1946 and 1947. ERA, then a part of Univac, included a drum memory in its 1103, announced in February 1953. The first mass-produced computer, the IBM 650, also announced in 1953, had about 8.5 kilobytes of drum memory.
Magnetic-core memory was patented in 1949, with its first usage demonstrated for the Whirlwind computer in August 1953. Commercialization followed quickly. Magnetic core was used in peripherals of the IBM 702 delivered in July 1955, and later in the 702 itself. The IBM 704 (1955) and the Ferranti Mercury (1957) used magnetic-core memory. It went on to dominate the field into the 1970s, when it was replaced with semiconductor memory. Magnetic core peaked in volume about 1975 and declined in usage and market share thereafter.
As late as 1980, PDP-11/45 machines using magnetic-core main memory and drums for swapping were still in use at many of the original UNIX sites.
Early digital computer characteristics
Transistor computers
The bipolar transistor was invented in 1947. From 1955 onward transistors replaced vacuum tubes in computer designs, giving rise to the "second generation" of computers. Compared to vacuum tubes, transistors have many advantages: they are smaller, and require less power than vacuum tubes, so give off less heat. Silicon junction transistors were much more reliable than vacuum tubes and had longer service life. Transistorized computers could contain tens of thousands of binary logic circuits in a relatively compact space. Transistors greatly reduced computers' size, initial cost, and operating cost. Typically, second-generation computers were composed of large numbers of printed circuit boards such as the IBM Standard Modular System, each carrying one to four logic gates or flip-flops.
At the University of Manchester, a team under the leadership of Tom Kilburn designed and built a machine using the newly developed transistors instead of valves. Initially the only devices available were germanium point-contact transistors, less reliable than the valves they replaced but which consumed far less power. Their first transistorised computer, and the first in the world, was operational by 1953, and a second version was completed there in April 1955. The 1955 version used 200 transistors, 1,300 solid-state diodes, and had a power consumption of 150 watts. However, the machine did make use of valves to generate its 125 kHz clock waveforms and in the circuitry to read and write on its magnetic drum memory, so it was not the first completely transistorized computer.
That distinction goes to the Harwell CADET of 1955, built by the electronics division of the Atomic Energy Research Establishment at Harwell. The design featured a 64-kilobyte magnetic drum memory store with multiple moving heads that had been designed at the National Physical Laboratory, UK. By 1953 this team had transistor circuits operating to read and write on a smaller magnetic drum from the Royal Radar Establishment. The machine used a low clock speed of only 58 kHz to avoid having to use any valves to generate the clock waveforms.
CADET used 324 point-contact transistors provided by the UK company Standard Telephones and Cables; 76 junction transistors were used for the first-stage amplifiers for data read from the drum, since point-contact transistors were too noisy. From August 1956 CADET was offering a regular computing service, during which it often executed continuous computing runs of 80 hours or more. Problems with the reliability of early batches of point-contact and alloyed-junction transistors meant that the machine's mean time between failures was about 90 minutes, but this improved once the more reliable bipolar junction transistors became available.
The Manchester University Transistor Computer's design was adopted by the local engineering firm of Metropolitan-Vickers in their Metrovick 950, the first commercial transistor computer anywhere. Six Metrovick 950s were built, the first completed in 1956. They were successfully deployed within various departments of the company and were in use for about five years. A second generation computer, the IBM 1401, captured about one third of the world market. IBM installed more than ten thousand 1401s between 1960 and 1964.
Transistor peripherals
Transistorized electronics improved not only the CPU (Central Processing Unit), but also the peripheral devices. The second generation disk data storage units were able to store tens of millions of letters and digits. Next to the fixed disk storage units, connected to the CPU via high-speed data transmission, were removable disk data storage units. A removable disk pack can be easily exchanged with another pack in a few seconds. Even if the removable disks' capacity is smaller than fixed disks, their interchangeability guarantees a nearly unlimited quantity of data close at hand. Magnetic tape provided archival capability for this data, at a lower cost than disk.
Many second-generation CPUs delegated peripheral device communications to a secondary processor. For example, while the communication processor controlled card reading and punching, the main CPU executed calculations and binary branch instructions. One databus would bear data between the main CPU and core memory at the CPU's fetch-execute cycle rate, and other databusses would typically serve the peripheral devices. On the PDP-1, the core memory's cycle time was 5 microseconds; consequently most arithmetic instructions took 10 microseconds (100,000 operations per second) because most operations took at least two memory cycles; one for the instruction, one for the operand data fetch.
During the second generation remote terminal units (often in the form of Teleprinters like a Friden Flexowriter) saw greatly increased use. Telephone connections provided sufficient speed for early remote terminals and allowed hundreds of kilometers separation between remote-terminals and the computing center. Eventually these stand-alone computer networks would be generalized into an interconnected network of networks—the Internet.
Transistor supercomputers
The early 1960s saw the advent of supercomputing. The Atlas was a joint development between the University of Manchester, Ferranti, and Plessey, and was first installed at Manchester University and officially commissioned in 1962 as one of the world's first supercomputers – considered to be the most powerful computer in the world at that time. It was said that whenever Atlas went offline half of the United Kingdom's computer capacity was lost. It was a second-generation machine, using discrete germanium transistors. Atlas also pioneered the Atlas Supervisor, "considered by many to be the first recognisable modern operating system".
In the US, a series of computers at Control Data Corporation (CDC) were designed by Seymour Cray to use innovative designs and parallelism to achieve superior computational peak performance. The CDC 6600, released in 1964, is generally considered the first supercomputer. The CDC 6600 outperformed its predecessor, the IBM 7030 Stretch, by about a factor of 3. With performance of about 1 megaFLOPS, the CDC 6600 was the world's fastest computer from 1964 to 1969, when it relinquished that status to its successor, the CDC 7600.
Integrated circuit computers
The "third-generation" of digital electronic computers used integrated circuit (IC) chips as the basis of their logic.
The idea of an integrated circuit was conceived by a radar scientist working for the Royal Radar Establishment of the Ministry of Defence, Geoffrey W.A. Dummer.
The first working integrated circuits were invented by Jack Kilby at Texas Instruments and Robert Noyce at Fairchild Semiconductor. Kilby recorded his initial ideas concerning the integrated circuit in July 1958, successfully demonstrating the first working integrated example on 12 September 1958. Kilby's invention was a hybrid integrated circuit (hybrid IC). It had external wire connections, which made it difficult to mass-produce.
Noyce came up with his own idea of an integrated circuit half a year after Kilby. Noyce's invention was a monolithic integrated circuit (IC) chip. His chip solved many practical problems that Kilby's had not. Produced at Fairchild Semiconductor, it was made of silicon, whereas Kilby's chip was made of germanium. The basis for Noyce's monolithic IC was Fairchild's planar process, which allowed integrated circuits to be laid out using the same principles as those of printed circuits. The planar process was developed by Noyce's colleague Jean Hoerni in early 1959, based on Mohamed M. Atalla's work on semiconductor surface passivation by silicon dioxide at Bell Labs in the late 1950s.
Third generation (integrated circuit) computers first appeared in the early 1960s in computers developed for government purposes, and then in commercial computers beginning in the mid-1960s. The first silicon IC computer was the Apollo Guidance Computer or AGC. Although not the most powerful computer of its time, the extreme constraints on size, mass, and power of the Apollo spacecraft required the AGC to be much smaller and denser than any prior computer, weighing in at only . Each lunar landing mission carried two AGCs, one each in the command and lunar ascent modules.
Semiconductor memory
The MOSFET (metal-oxide-semiconductor field-effect transistor, or MOS transistor) was invented by Mohamed M. Atalla and Dawon Kahng at Bell Labs in 1959. In addition to data processing, the MOSFET enabled the practical use of MOS transistors as memory cell storage elements, a function previously served by magnetic cores. Semiconductor memory, also known as MOS memory, was cheaper and consumed less power than magnetic-core memory. MOS random-access memory (RAM), in the form of static RAM (SRAM), was developed by John Schmidt at Fairchild Semiconductor in 1964. In 1966, Robert Dennard at the IBM Thomas J. Watson Research Center developed MOS dynamic RAM (DRAM). In 1967, Dawon Kahng and Simon Sze at Bell Labs developed the floating-gate MOSFET, the basis for MOS non-volatile memory such as EPROM, EEPROM and flash memory.
Microprocessor computers
The "fourth-generation" of digital electronic computers used microprocessors as the basis of their logic. The microprocessor has origins in the MOS integrated circuit (MOS IC) chip. Due to rapid MOSFET scaling, MOS IC chips rapidly increased in complexity at a rate predicted by Moore's law, leading to large-scale integration (LSI) with hundreds of transistors on a single MOS chip by the late 1960s. The application of MOS LSI chips to computing was the basis for the first microprocessors, as engineers began recognizing that a complete computer processor could be contained on a single MOS LSI chip.
The subject of exactly which device was the first microprocessor is contentious, partly due to lack of agreement on the exact definition of the term "microprocessor". The earliest multi-chip microprocessors were the Four-Phase Systems AL-1 in 1969 and Garrett AiResearch MP944 in 1970, developed with multiple MOS LSI chips. The first single-chip microprocessor was the Intel 4004, developed on a single PMOS LSI chip. It was designed and realized by Ted Hoff, Federico Faggin, Masatoshi Shima and Stanley Mazor at Intel, and released in 1971. Tadashi Sasaki and Masatoshi Shima at Busicom, a calculator manufacturer, had the initial insight that the CPU could be a single MOS LSI chip, supplied by Intel.
While the earliest microprocessor ICs literally contained only the processor, i.e. the central processing unit, of a computer, their progressive development naturally led to chips containing most or all of the internal electronic parts of a computer. The Intel 8742, for example, is an 8-bit microcontroller that includes a CPU running at 12 MHz, 128 bytes of RAM, 2048 bytes of EPROM, and I/O in the same chip.
During the 1960s there was considerable overlap between second and third generation technologies. IBM implemented its IBM Solid Logic Technology modules in hybrid circuits for the IBM System/360 in 1964. As late as 1975, Sperry Univac continued the manufacture of second-generation machines such as the UNIVAC 494. The Burroughs large systems such as the B5000 were stack machines, which allowed for simpler programming. These pushdown automatons were also implemented in minicomputers and microprocessors later, which influenced programming language design. Minicomputers served as low-cost computer centers for industry, business and universities. It became possible to simulate analog circuits with the simulation program with integrated circuit emphasis, or SPICE (1971) on minicomputers, one of the programs for electronic design automation (EDA). The microprocessor led to the development of microcomputers, small, low-cost computers that could be owned by individuals and small businesses. Microcomputers, the first of which appeared in the 1970s, became ubiquitous in the 1980s and beyond.
While which specific system is considered the first microcomputer is a matter of debate, as there were several unique hobbyist systems developed based on the Intel 4004 and its successor, the Intel 8008, the first commercially available microcomputer kit was the Intel 8080-based Altair 8800, which was announced in the January 1975 cover article of Popular Electronics. However, this was an extremely limited system in its initial stages, having only 256 bytes of DRAM in its initial package and no input-output except its toggle switches and LED register display. Despite this, it was initially surprisingly popular, with several hundred sales in the first year, and demand rapidly outstripped supply. Several early third-party vendors such as Cromemco and Processor Technology soon began supplying additional S-100 bus hardware for the Altair 8800.
In April 1975 at the Hannover Fair, Olivetti presented the P6060, the world's first complete, pre-assembled personal computer system. The central processing unit consisted of two cards, code named PUCE1 and PUCE2, and unlike most other personal computers was built with TTL components rather than a microprocessor. It had one or two 8" floppy disk drives, a 32-character plasma display, an 80-column graphical thermal printer, 48 Kbytes of RAM, and the BASIC language. As a complete system, this was a significant step from the Altair, though it never achieved the same success. It was in competition with a similar product by IBM that had an external floppy disk drive.
From 1975 to 1977, most microcomputers, such as the MOS Technology KIM-1, the Altair 8800, and some versions of the Apple I, were sold as kits for do-it-yourselfers. Pre-assembled systems did not gain much ground until 1977, with the introduction of the Apple II, the Tandy TRS-80, the first SWTPC computers, and the Commodore PET. Computing has evolved with microcomputer architectures, with features added from their larger brethren, now dominant in most market segments.
A NeXT Computer and its object-oriented development tools and libraries were used by Tim Berners-Lee and Robert Cailliau at CERN to develop the world's first web server software, CERN httpd, and also used to write the first web browser, WorldWideWeb.
Systems as complicated as computers require very high reliability. ENIAC ran continuously from 1947 to 1955, eight years, before being shut down. Although a vacuum tube might fail, it would be replaced without bringing down the system. By the simple strategy of never shutting down ENIAC, failures were dramatically reduced. The vacuum-tube SAGE air-defense computers became remarkably reliable – installed in pairs, one off-line, tubes likely to fail did so when the computer was intentionally run at reduced power to find them. Hot-pluggable hard disks, like the hot-pluggable vacuum tubes of yesteryear, continue the tradition of repair during continuous operation. Semiconductor memories routinely have no errors when they operate, although operating systems like Unix have employed memory tests on start-up to detect failing hardware. Today, the requirement of reliable performance is made even more stringent when server farms are the delivery platform. Google has managed this by using fault-tolerant software to recover from hardware failures, and is even working on the concept of replacing entire server farms on-the-fly, during a service event.
In the 21st century, multi-core CPUs became commercially available. Content-addressable memory (CAM) has become inexpensive enough to be used in networking, and is frequently used for on-chip cache memory in modern microprocessors, although no computer system has yet implemented hardware CAMs for use in programming languages. Currently, CAMs (or associative arrays) in software are programming-language-specific. Semiconductor memory cell arrays are very regular structures, and manufacturers prove their processes on them; this allows price reductions on memory products. During the 1980s, CMOS logic gates developed into devices that could be made as fast as other circuit types; computer power consumption could therefore be decreased dramatically. Unlike the continuous current draw of a gate based on other logic types, a CMOS gate only draws significant current during the 'transition' between logic states, except for leakage.
CMOS circuits have allowed computing to become a commodity which is now ubiquitous, embedded in many forms, from greeting cards and telephones to satellites. The thermal design power which is dissipated during operation has become as essential as computing speed of operation. In 2006 servers consumed 1.5% of the total energy budget of the U.S. The energy consumption of computer data centers was expected to double to 3% of world consumption by 2011. The SoC (system on a chip) has compressed even more of the integrated circuitry into a single chip; SoCs are enabling phones and PCs to converge into single hand-held wireless mobile devices.
Quantum computing is an emerging technology in the field of computing. MIT Technology Review reported on 10 November 2017 that IBM had created a 50-qubit computer; at the time its quantum state lasted 50 microseconds. Google researchers have been able to extend that 50-microsecond limit, as reported on 14 July 2021 in Nature; stability was extended 100-fold by spreading a single logical qubit over chains of data qubits for quantum error correction. Physical Review X reported a technique for 'single-gate sensing as a viable readout method for spin qubits' (a singlet-triplet spin state in silicon) on 26 November 2018. A Google team has succeeded in operating their RF pulse modulator chip at 3 kelvin, simplifying the cryogenics of their 72-qubit computer, which is set up to operate at 0.3 kelvin, but the readout circuitry and another driver remain to be brought into the cryogenics. Silicon qubit systems have demonstrated entanglement at non-local distances (see quantum supremacy).
Computing hardware and its software have even become a metaphor for the operation of the universe.
Epilogue
An indication of the rapidity of development of this field can be inferred from the history of the seminal 1947 article by Burks, Goldstine and von Neumann. By the time that anyone had time to write anything down, it was obsolete. After 1945, others read John von Neumann's First Draft of a Report on the EDVAC, and immediately started implementing their own systems. To this day, the rapid pace of development has continued, worldwide.
A 1966 article in Time predicted that: "By 2000, the machines will be producing so much that everyone in the U.S. will, in effect, be independently wealthy. How to use leisure time will be a major problem."
See also
Antikythera mechanism
History of computing
History of computing hardware (1960s–present)
History of laptops
History of personal computers
History of software
Information Age
IT History Society
Timeline of computing
List of pioneers in computer science
Vacuum-tube computer
Notes
References
Further reading
Computers and Automation Magazine – Pictorial Report on the Computer Field:
A PICTORIAL INTRODUCTION TO COMPUTERS – 06/1957
A PICTORIAL MANUAL ON COMPUTERS – 12/1957
A PICTORIAL MANUAL ON COMPUTERS, Part 2 – 01/1958
1958–1967 Pictorial Report on the Computer Field – December issues (195812.pdf, ..., 196712.pdf)
Bit by Bit: An Illustrated History of Computers, Stan Augarten, 1984. OCR with permission of the author
External links
Obsolete Technology – Old Computers
History of calculating technology
Historic Computers in Japan
The History of Japanese Mechanical Calculating Machines
Computer History — a collection of articles by Bob Bemer
25 Microchips that shook the world – a collection of articles by the Institute of Electrical and Electronics Engineers
Columbia University Computing History
Computer Histories – An introductory course on the history of computing
Revolution – The First 2000 Years Of Computing, Computer History Museum
13777 | https://en.wikipedia.org/wiki/Hard%20disk%20drive | Hard disk drive | A hard disk drive (HDD), hard disk, hard drive, or fixed disk is an electro-mechanical data storage device that stores and retrieves digital data using magnetic storage and one or more rigid rapidly rotating platters coated with magnetic material. The platters are paired with magnetic heads, usually arranged on a moving actuator arm, which read and write data to the platter surfaces. Data is accessed in a random-access manner, meaning that individual blocks of data can be stored and retrieved in any order. HDDs are a type of non-volatile storage, retaining stored data even when powered off. Modern HDDs are typically in the form of a small rectangular box.
Introduced by IBM in 1956, HDDs were the dominant secondary storage device for general-purpose computers beginning in the early 1960s. HDDs maintained this position into the modern era of servers and personal computers, though personal computing devices produced in large volume, like cell phones and tablets, rely on flash memory storage devices. More than 224 companies have produced HDDs historically, though after extensive industry consolidation most units are manufactured by Seagate, Toshiba, and Western Digital. HDDs dominate the volume of storage produced (exabytes per year) for servers. Though production is growing slowly (by exabytes shipped), sales revenues and unit shipments are declining because solid-state drives (SSDs) have higher data-transfer rates, higher areal storage density, somewhat better reliability, and much lower latency and access times.
The revenues for SSDs, most of which use NAND flash memory, slightly exceed those for HDDs. Flash storage products had more than twice the revenue of hard disk drives. Though SSDs have four to nine times higher cost per bit, they are replacing HDDs in applications where speed, power consumption, small size, high capacity and durability are important. Cost per bit for SSDs is falling, and the price premium over HDDs has narrowed.
The primary characteristics of an HDD are its capacity and performance. Capacity is specified in unit prefixes corresponding to powers of 1000: a 1-terabyte (TB) drive has a capacity of 1,000 gigabytes, where 1 gigabyte = 10^9 bytes. Typically, some of an HDD's capacity is unavailable to the user because it is used by the file system and the computer operating system, and possibly inbuilt redundancy for error correction and recovery. There can also be confusion regarding storage capacity, since capacities are stated in decimal gigabytes (powers of 1000) by HDD manufacturers, whereas the most commonly used operating systems report capacities in powers of 1024, which results in a smaller number than advertised. Performance is specified by the time required to move the heads to a track or cylinder (average access time), plus the time it takes for the desired sector to move under the head (average latency, which is a function of the physical rotational speed in revolutions per minute), plus the speed at which the data is transmitted (data rate).
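As a rough illustration of how these performance components combine for a small random read, the sketch below uses assumed but plausible figures (a 7,200 rpm spindle, 8.5 ms average seek, 150 MB/s sustained transfer rate); the numbers are illustrative assumptions rather than a specific drive's specifications.

    # Rough estimate of the time to service one small random read on an HDD.
    rpm = 7200                    # assumed spindle speed
    avg_seek_ms = 8.5             # assumed average seek time, in milliseconds
    transfer_rate_bytes = 150e6   # assumed sustained media transfer rate, bytes/s
    block_bytes = 4096            # size of the requested block

    avg_latency_ms = 0.5 * 60_000 / rpm                  # half a rotation, in ms
    transfer_ms = block_bytes / transfer_rate_bytes * 1000

    total_ms = avg_seek_ms + avg_latency_ms + transfer_ms
    print(f"seek {avg_seek_ms:.2f} ms + latency {avg_latency_ms:.2f} ms "
          f"+ transfer {transfer_ms:.3f} ms = about {total_ms:.1f} ms")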
The two most common form factors for modern HDDs are 3.5-inch, for desktop computers, and 2.5-inch, primarily for laptops. HDDs are connected to systems by standard interface cables such as PATA (Parallel ATA), SATA (Serial ATA), USB or SAS (Serial Attached SCSI) cables.
History
The first production IBM hard disk drive, the 350 disk storage, shipped in 1957 as a component of the IBM 305 RAMAC system. It was approximately the size of two medium-sized refrigerators and stored five million six-bit characters (3.75 megabytes) on a stack of 52 disks (100 surfaces used). The 350 had a single arm with two read/write heads, one facing up and the other down, that moved both horizontally between a pair of adjacent platters and vertically from one pair of platters to a second set. Variants of the IBM 350 were the IBM 355, IBM 7300 and IBM 1405.
In 1961 IBM announced, and in 1962 shipped, the IBM 1301 disk storage unit, which superseded
the IBM 350 and similar drives. The 1301 consisted of one (for Model 1) or two (for Model 2) modules, each containing 25 large platters. While the earlier IBM disk drives used only two read/write heads per arm, the 1301 used an array of 48 heads (comb), each array moving horizontally as a single unit, one head per surface used. Cylinder-mode read/write operations were supported, and the heads flew about 250 micro-inches (about 6 µm) above the platter surface. Motion of the head array depended upon a binary adder system of hydraulic actuators which assured repeatable positioning. The 1301 cabinet was about the size of three home refrigerators placed side by side, storing the equivalent of about 21 million eight-bit bytes per module. Access time was about a quarter of a second.
Also in 1962, IBM introduced the model 1311 disk drive, which was about the size of a washing machine and stored two million characters on a removable disk pack. Users could buy additional packs and interchange them as needed, much like reels of magnetic tape. Later models of removable pack drives, from IBM and others, became the norm in most computer installations and reached capacities of 300 megabytes by the early 1980s. Non-removable HDDs were called "fixed disk" drives.
In 1963 IBM introduced the 1302, with twice the track capacity and twice as many tracks per cylinder as the 1301. The 1302 had one (for Model 1) or two (for Model 2) modules, each containing a separate comb for the first 250 tracks and the last 250 tracks.
Some high-performance HDDs were manufactured with one head per track, e.g., Burroughs B-475 in 1964, IBM 2305 in 1970, so that no time was lost physically moving the heads to a track and the only latency was the time for the desired block of data to rotate into position under the head. Known as fixed-head or head-per-track disk drives, they were very expensive and are no longer in production.
In 1973, IBM introduced a new type of HDD code-named "Winchester". Its primary distinguishing feature was that the disk heads were not withdrawn completely from the stack of disk platters when the drive was powered down. Instead, the heads were allowed to "land" on a special area of the disk surface upon spin-down, "taking off" again when the disk was later powered on. This greatly reduced the cost of the head actuator mechanism, but precluded removing just the disks from the drive as was done with the disk packs of the day. Instead, the first models of "Winchester technology" drives featured a removable disk module, which included both the disk pack and the head assembly, leaving the actuator motor in the drive upon removal. Later "Winchester" drives abandoned the removable media concept and returned to non-removable platters.
In 1974 IBM introduced the swinging arm actuator, made feasible because the Winchester recording heads function well when skewed to the recorded tracks. The simple design of the IBM GV (Gulliver) drive, invented at IBM's UK Hursley Labs, became IBM's most licensed electro-mechanical invention of all time, the actuator and filtration system being adopted in the 1980s eventually for all HDDs, and still universal nearly 40 years and 10 billion arms later.
Like the first removable pack drive, the first "Winchester" drives used 14-inch platters. In 1978, IBM introduced a swing arm drive, the IBM 0680 (Piccolo), with eight-inch platters, exploring the possibility that smaller platters might offer advantages. Other eight-inch drives followed, then 5¼-inch drives sized to replace the contemporary floppy disk drives. The latter were primarily intended for the then-fledgling personal computer (PC) market.
Over time, as recording densities were greatly increased, further reductions in disk diameter to 3.5" and 2.5" were found to be optimum. Powerful rare earth magnet materials became affordable during this period, and were complementary to the swing arm actuator design to make possible the compact form factors of modern HDDs.
As the 1980s began, HDDs were a rare and very expensive additional feature in PCs, but by the late 1980s their cost had been reduced to the point where they were standard on all but the cheapest computers.
Most HDDs in the early 1980s were sold to PC end users as an external, add-on subsystem. The subsystem was not sold under the drive manufacturer's name but under the subsystem manufacturer's name such as Corvus Systems and Tallgrass Technologies, or under the PC system manufacturer's name such as the Apple ProFile. The IBM PC/XT in 1983 included an internal 10 MB HDD, and soon thereafter internal HDDs proliferated on personal computers.
External HDDs remained popular for much longer on the Apple Macintosh. Many Macintosh computers made between 1986 and 1998 featured a SCSI port on the back, making external expansion simple. Older compact Macintosh computers did not have user-accessible hard drive bays (indeed, the Macintosh 128K, Macintosh 512K, and Macintosh Plus did not feature a hard drive bay at all), so on those models external SCSI disks were the only reasonable option for expanding upon any internal storage.
HDD improvements have been driven by increasing areal density. Applications expanded through the 2000s, from the mainframe computers of the late 1950s to most mass storage applications including computers and consumer applications such as storage of entertainment content.
In the 2000s and 2010s, NAND began supplanting HDDs in applications requiring portability or high performance. NAND performance is improving faster than HDDs, and applications for HDDs are eroding. In 2018, the largest hard drive had a capacity of 15 TB, while the largest capacity SSD had a capacity of 100 TB. HDDs were forecast to reach 100 TB capacities around 2025, but the expected pace of improvement was pared back to 50 TB by 2026. Smaller form factors, 1.8-inches and below, were discontinued around 2010. The cost of solid-state storage (NAND), represented by Moore's law, is improving faster than HDDs. NAND has a higher price elasticity of demand than HDDs, and this drives market growth. During the late 2000s and 2010s, the product life cycle of HDDs entered a mature phase, and slowing sales may indicate the onset of the declining phase.
The 2011 Thailand floods damaged the manufacturing plants and impacted hard disk drive cost adversely between 2011 and 2013.
In 2019, Western Digital closed its last Malaysian HDD factory due to decreasing demand, to focus on SSD production. All three remaining HDD manufacturers have had decreasing demand for their HDDs since 2014.
Technology
Magnetic recording
A modern HDD records data by magnetizing a thin film of ferromagnetic material on both sides of a disk. Sequential changes in the direction of magnetization represent binary data bits. The data is read from the disk by detecting the transitions in magnetization. User data is encoded using an encoding scheme, such as run-length limited encoding, which determines how the data is represented by the magnetic transitions.
A typical HDD design consists of a spindle that holds flat circular disks, called platters, which hold the recorded data. The platters are made from a non-magnetic material, usually aluminum alloy, glass, or ceramic. They are coated with a shallow layer of magnetic material, typically 10–20 nm in depth, with an outer layer of carbon for protection.
The platters in contemporary HDDs are spun at speeds varying from 4,200 rpm in energy-efficient portable devices to 15,000 rpm for high-performance servers. The first HDDs spun at 1,200 rpm and, for many years, 3,600 rpm was the norm. The platters in most consumer-grade HDDs now spin at 5,400 or 7,200 rpm.
Information is written to and read from a platter as it rotates past devices called read-and-write heads that are positioned to operate very close to the magnetic surface, with their flying height often in the range of tens of nanometers. The read-and-write head is used to detect and modify the magnetization of the material passing immediately under it.
In modern drives, there is one head for each magnetic platter surface on the spindle, mounted on a common arm. An actuator arm (or access arm) moves the heads on an arc (roughly radially) across the platters as they spin, allowing each head to access almost the entire surface of the platter as it spins. The arm is moved using a voice coil actuator or in some older designs a stepper motor. Early hard disk drives wrote data at some constant bits per second, resulting in all tracks having the same amount of data per track but modern drives (since the 1990s) use zone bit recording – increasing the write speed from inner to outer zone and thereby storing more data per track in the outer zones.
In modern drives, the small size of the magnetic regions creates the danger that their magnetic state might be lost because of thermal effects — thermally induced magnetic instability which is commonly known as the "superparamagnetic limit". To counter this, the platters are coated with two parallel magnetic layers, separated by a three-atom layer of the non-magnetic element ruthenium, and the two layers are magnetized in opposite orientation, thus reinforcing each other. Another technology used to overcome thermal effects to allow greater recording densities is perpendicular recording, first shipped in 2005, and used in certain HDDs.
In 2004, a higher-density recording media was introduced, consisting of coupled soft and hard magnetic layers. So-called exchange spring media magnetic storage technology, also known as exchange coupled composite media, allows good writability due to the write-assist nature of the soft layer. However, the thermal stability is determined only by the hardest layer and not influenced by the soft layer.
Components
A typical HDD has two electric motors: a spindle motor that spins the disks and an actuator (motor) that positions the read/write head assembly across the spinning disks. The disk motor has an external rotor attached to the disks; the stator windings are fixed in place. Opposite the actuator at the end of the head support arm is the read-write head; thin printed-circuit cables connect the read-write heads to amplifier electronics mounted at the pivot of the actuator. The head support arm is very light, but also stiff; in modern drives, acceleration at the head reaches 550 g.
The actuator is a permanent magnet and moving coil motor that swings the heads to the desired position. A metal plate supports a squat neodymium-iron-boron (NIB) high-flux magnet. Beneath this plate is the moving coil, often referred to as the voice coil by analogy to the coil in loudspeakers, which is attached to the actuator hub, and beneath that is a second NIB magnet, mounted on the bottom plate of the motor (some drives have only one magnet).
The voice coil itself is shaped rather like an arrowhead and is made of doubly coated copper magnet wire. The inner layer is insulation, and the outer is thermoplastic, which bonds the coil together after it is wound on a form, making it self-supporting. The portions of the coil along the two sides of the arrowhead (which point to the center of the actuator bearing) then interact with the magnetic field of the fixed magnet. Current flowing radially outward along one side of the arrowhead and radially inward on the other produces the tangential force. If the magnetic field were uniform, each side would generate opposing forces that would cancel each other out. Therefore, the surface of the magnet is half north pole and half south pole, with the radial dividing line in the middle, causing the two sides of the coil to see opposite magnetic fields and produce forces that add instead of canceling. Currents along the top and bottom of the coil produce radial forces that do not rotate the head.
The HDD's electronics control the movement of the actuator and the rotation of the disk and perform reads and writes on demand from the disk controller. Feedback of the drive electronics is accomplished by means of special segments of the disk dedicated to servo feedback. These are either complete concentric circles (in the case of dedicated servo technology) or segments interspersed with real data (in the case of embedded servo technology). The servo feedback optimizes the signal-to-noise ratio of the GMR sensors by adjusting the voice coil of the actuated arm. The spinning of the disk also uses a servo motor. Modern disk firmware is capable of scheduling reads and writes efficiently on the platter surfaces and remapping sectors of the media that have failed.
Error rates and handling
Modern drives make extensive use of error correction codes (ECCs), particularly Reed–Solomon error correction. These techniques store extra bits, determined by mathematical formulas, for each block of data; the extra bits allow many errors to be corrected invisibly. The extra bits themselves take up space on the HDD, but allow higher recording densities to be employed without causing uncorrectable errors, resulting in much larger storage capacity. For example, a typical 1 TB hard disk with 512-byte sectors provides additional capacity of about 93 GB for the ECC data.
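The arithmetic behind that figure can be sketched as follows; the roughly 48 bytes of ECC per 512-byte sector is an assumed, illustrative value chosen to land in the quoted 93 GB range, not a published drive parameter.

    # Illustrative ECC-overhead arithmetic for a 1 TB drive with 512-byte sectors.
    user_capacity_bytes = 1_000_000_000_000   # 1 TB of user data
    sector_bytes = 512
    ecc_bytes_per_sector = 48                 # assumed ECC bytes per sector

    sectors = user_capacity_bytes // sector_bytes
    ecc_total_gb = sectors * ecc_bytes_per_sector / 1e9
    print(f"{sectors:,} sectors -> roughly {ecc_total_gb:.0f} GB of ECC data")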
In the newest drives, low-density parity-check codes (LDPC) were supplanting Reed–Solomon; LDPC codes enable performance close to the Shannon limit and thus provide the highest storage density available.
Typical hard disk drives attempt to "remap" the data in a physical sector that is failing to a spare physical sector provided by the drive's "spare sector pool" (also called "reserve pool"), while relying on the ECC to recover stored data while the number of errors in a bad sector is still low enough. The S.M.A.R.T (Self-Monitoring, Analysis and Reporting Technology) feature counts the total number of errors in the entire HDD fixed by ECC (although not on all hard drives as the related S.M.A.R.T attributes "Hardware ECC Recovered" and "Soft ECC Correction" are not consistently supported), and the total number of performed sector remappings, as the occurrence of many such errors may predict an HDD failure.
The "No-ID Format", developed by IBM in the mid-1990s, contains information about which sectors are bad and where remapped sectors have been located.
Only a tiny fraction of the detected errors end up as not correctable. Examples of specified uncorrected bit read error rates include:
2013 specifications for enterprise SAS disk drives state the error rate to be one uncorrected bit read error in every 10^16 bits read,
2018 specifications for consumer SATA hard drives state the error rate to be one uncorrected bit read error in every 10^14 bits.
Within a given manufacturer's model, the uncorrected bit error rate is typically the same regardless of the capacity of the drive.
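As a hedged sketch of what such rates imply, reading a drive from end to end and multiplying the number of bits read by the specified error rate gives the expected number of unrecoverable read errors; the 10 TB drive size below is an illustrative assumption.

    # Expected unrecoverable read errors when reading an entire drive once.
    def expected_errors(capacity_bytes, bit_error_rate):
        """bit_error_rate: specified uncorrected errors per bit read, e.g. 1e-14."""
        return capacity_bytes * 8 * bit_error_rate

    drive_bytes = 10e12   # assumed 10 TB drive
    print(f"consumer SATA (1e-14):  {expected_errors(drive_bytes, 1e-14):.2f} expected errors")
    print(f"enterprise SAS (1e-16): {expected_errors(drive_bytes, 1e-16):.4f} expected errors")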
The worst type of errors are silent data corruptions which are errors undetected by the disk firmware or the host operating system; some of these errors may be caused by hard disk drive malfunctions while others originate elsewhere in the connection between the drive and the host.
Development
The rate of areal density advancement was similar to Moore's law (doubling every two years) through 2010: 60% per year during 1988–1996, 100% during 1996–2003 and 30% during 2003–2010. Speaking in 1997, Gordon Moore called the increase "flabbergasting", while observing later that growth cannot continue forever. Price improvement decelerated to −12% per year during 2010–2017, as the growth of areal density slowed. The rate of advancement for areal density slowed to 10% per year during 2010–2016, and there was difficulty in migrating from perpendicular recording to newer technologies.
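To relate those annual growth rates to the "doubling every two years" comparison, the doubling time implied by a constant annual growth rate r is ln 2 / ln(1 + r); a brief sketch of that conversion:

    import math

    # Doubling time implied by a constant annual areal-density growth rate.
    def doubling_years(annual_growth):
        return math.log(2) / math.log(1 + annual_growth)

    for rate in (0.60, 1.00, 0.30, 0.10):
        print(f"{rate:.0%} per year -> density doubles every {doubling_years(rate):.1f} years")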
As bit cell size decreases, more data can be put onto a single drive platter. In 2013, a production desktop 3 TB HDD (with four platters) would have had an areal density of about 500 Gbit/in^2, which would have amounted to a bit cell comprising about 18 magnetic grains (11 by 1.6 grains). Since the mid-2000s, areal density progress has been challenged by a superparamagnetic trilemma involving grain size, grain magnetic strength and the ability of the head to write. In order to maintain an acceptable signal-to-noise ratio, smaller grains are required; smaller grains may self-reverse (electrothermal instability) unless their magnetic strength is increased, but known write head materials are unable to generate a magnetic field strong enough to write the medium in the increasingly smaller space taken by grains.
Magnetic storage technologies are being developed to address this trilemma and to compete with flash memory–based solid-state drives (SSDs). In 2013, Seagate introduced shingled magnetic recording (SMR), intended as something of a "stopgap" technology between PMR and Seagate's intended successor, heat-assisted magnetic recording (HAMR). SMR utilises overlapping tracks for increased data density, at the cost of design complexity and lower data access speeds (particularly write speeds and random access 4k speeds).
By contrast, HGST (now part of Western Digital) focused on developing ways to seal helium-filled drives instead of the usual filtered air. Since turbulence and friction are reduced, higher areal densities can be achieved due to using a smaller track width, and the energy dissipated due to friction is lower as well, resulting in a lower power draw. Furthermore, more platters can be fit into the same enclosure space, although helium is notoriously difficult to keep from escaping. Thus, helium drives are completely sealed and do not have a breather port, unlike their air-filled counterparts.
Other recording technologies are either under research or have been commercially implemented to increase areal density, including Seagate's heat-assisted magnetic recording (HAMR). HAMR requires a different architecture with redesigned media and read/write heads, new lasers, and new near-field optical transducers. HAMR is expected to ship commercially in late 2020 or 2021. Technical issues delayed the introduction of HAMR by a decade, from earlier projections of 2009, 2015, 2016, and the first half of 2019. Some drives have adopted dual independent actuator arms to increase read/write speeds and compete with SSDs. HAMR's planned successor, bit-patterned recording (BPR), has been removed from the roadmaps of Western Digital and Seagate. Western Digital's microwave-assisted magnetic recording (MAMR), also referred to as energy-assisted magnetic recording (EAMR), was sampled in 2020, with the first EAMR drive, the Ultrastar HC550, shipping in late 2020. Two-dimensional magnetic recording (TDMR) and "current perpendicular to plane" giant magnetoresistance (CPP/GMR) heads have appeared in research papers. A 3D-actuated vacuum drive (3DHD) concept has been proposed.
The rate of areal density growth has dropped below the historical Moore's law rate of 40% per year. Depending upon assumptions on feasibility and timing of these technologies, Seagate forecasts that areal density will grow 20% per year during 2020–2034.
Capacity
The highest-capacity HDDs shipping commercially in 2021 are 20 TB.
The capacity of a hard disk drive, as reported by an operating system to the end user, is smaller than the amount stated by the manufacturer for several reasons: the operating system using some space, use of some space for data redundancy, and space use for file system structures. Also the difference in capacity reported in SI decimal prefixed units vs. binary prefixes can lead to a false impression of missing capacity.
Calculation
Modern hard disk drives appear to their host controller as a contiguous set of logical blocks, and the gross drive capacity is calculated by multiplying the number of blocks by the block size. This information is available from the manufacturer's product specification, and from the drive itself through use of operating system functions that invoke low-level drive commands.
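For example, a minimal sketch of that block-count multiplication; the logical block count used here is an assumed value of the kind a nominal 1 TB drive with 512-byte logical blocks would report, not a figure taken from a particular product.

    # Gross capacity = number of logical blocks x logical block size.
    logical_block_size = 512              # bytes per logical block
    logical_block_count = 1_953_525_168   # assumed count for a nominal 1 TB drive

    capacity_bytes = logical_block_count * logical_block_size
    print(f"{capacity_bytes:,} bytes = about {capacity_bytes / 1e12:.3f} TB (decimal)")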
Some older drives, e.g., IBM 1301, CKD, have variable length records and the capacity calculation must take into account the characteristics of the records. Some newer DASD simulate CKD, and the same capacity formulae apply.
The gross capacity of older sector-oriented HDDs is calculated as the product of the number of cylinders per recording zone, the number of bytes per sector (most commonly 512), and the count of zones of the drive. Some modern SATA drives also report cylinder-head-sector (CHS) capacities, but these are not physical parameters because the reported values are constrained by historic operating system interfaces. The C/H/S scheme has been replaced by logical block addressing (LBA), a simple linear addressing scheme that locates blocks by an integer index, which starts at LBA 0 for the first block and increments thereafter. When using the C/H/S method to describe modern large drives, the number of heads is often set to 64, although a typical modern hard disk drive has between one and four platters.
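A minimal sketch of the two addressing schemes, using the conventional CHS-to-LBA mapping; the 16-head, 63-sectors-per-track geometry below is an illustrative logical geometry of the kind reported for compatibility, not a physical layout.

    # Conventional CHS -> LBA mapping: cylinders and heads count from 0, sectors from 1.
    def chs_to_lba(cylinder, head, sector, heads_per_cylinder, sectors_per_track):
        return (cylinder * heads_per_cylinder + head) * sectors_per_track + (sector - 1)

    HEADS, SECTORS = 16, 63                      # assumed logical geometry
    print(chs_to_lba(0, 0, 1, HEADS, SECTORS))   # first block maps to LBA 0
    print(chs_to_lba(2, 5, 10, HEADS, SECTORS))  # (2*16 + 5)*63 + 9 = 2340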
In modern HDDs, spare capacity for defect management is not included in the published capacity; however, in many early HDDs a certain number of sectors were reserved as spares, thereby reducing the capacity available to the operating system. Furthermore, many HDDs store their firmware in a reserved service zone, which is typically not accessible by the user, and is not included in the capacity calculation.
For RAID subsystems, data integrity and fault-tolerance requirements also reduce the realized capacity. For example, a RAID 1 array has about half the total capacity as a result of data mirroring, while a RAID 5 array with n drives loses 1/n of its capacity (equal to the capacity of a single drive) due to storing parity information. RAID subsystems are multiple drives that appear to be one drive or more drives to the user, but provide fault tolerance. Most RAID vendors use checksums to improve data integrity at the block level. Some vendors design systems using HDDs with sectors of 520 bytes to contain 512 bytes of user data and eight checksum bytes, or by using separate 512-byte sectors for the checksum data.
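A small sketch of how these redundancy levels reduce usable capacity for an array of identical drives; it deliberately ignores vendor checksum formats, hot spares and metadata, and the 8 x 4 TB array is an assumed example.

    # Usable capacity of simple RAID levels (simplified model).
    def usable_tb(level, drive_count, drive_tb):
        total = drive_count * drive_tb
        if level == 1:                    # two-way mirroring: half the raw capacity
            return total / 2
        if level == 5:                    # one drive's worth of parity
            return (drive_count - 1) * drive_tb
        if level == 6:                    # two drives' worth of parity
            return (drive_count - 2) * drive_tb
        return total                      # RAID 0: striping only, no redundancy

    for level in (0, 1, 5, 6):
        print(f"RAID {level}: {usable_tb(level, 8, 4):.0f} TB usable from 8 x 4 TB drives")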
Some systems may use hidden partitions for system recovery, reducing the capacity available to the end user without knowledge of special disk partitioning utilities like diskpart in Windows.
Formatting
Data is stored on a hard drive in a series of logical blocks. Each block is delimited by markers identifying its start and end, error detecting and correcting information, and space between blocks to allow for minor timing variations. These blocks often contained 512 bytes of usable data, but other sizes have been used. As drive density increased, an initiative known as Advanced Format extended the block size to 4096 bytes of usable data, with a resulting significant reduction in the amount of disk space used for block headers, error checking data, and spacing.
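The efficiency gain can be illustrated with a simple overhead model; the per-sector overhead figures below (covering markers, error-checking data and inter-block gaps) are assumed, illustrative values rather than published drive parameters.

    # Illustrative format-efficiency comparison of 512-byte and 4096-byte blocks.
    def format_efficiency(data_bytes, overhead_bytes):
        return data_bytes / (data_bytes + overhead_bytes)

    legacy = format_efficiency(512, 65)       # assumed ~65 overhead bytes per 512-byte block
    advanced = format_efficiency(4096, 115)   # assumed ~115 overhead bytes per 4096-byte block
    print(f"512-byte blocks:  {legacy:.1%} of the surface holds user data")
    print(f"4096-byte blocks: {advanced:.1%} of the surface holds user data")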
The process of initializing these logical blocks on the physical disk platters is called low-level formatting, which is usually performed at the factory and is not normally changed in the field. High-level formatting writes data structures used by the operating system to organize data files on the disk. This includes writing partition and file system structures into selected logical blocks. For example, some of the disk space will be used to hold a directory of disk file names and a list of logical blocks associated with a particular file.
Examples of partition mapping scheme include Master boot record (MBR) and GUID Partition Table (GPT). Examples of data structures stored on disk to retrieve files include the File Allocation Table (FAT) in the DOS file system and inodes in many UNIX file systems, as well as other operating system data structures (also known as metadata). As a consequence, not all the space on an HDD is available for user files, but this system overhead is usually small compared with user data.
Units
In the early days of computing, the total capacity of HDDs was specified in 7 to 9 decimal digits, frequently truncated with the idiom "millions".
By the 1970s, the total capacity of HDDs was given by manufacturers using SI decimal prefixes such as megabytes (1 MB = 1,000,000 bytes), gigabytes (1 GB = 1,000,000,000 bytes) and terabytes (1 TB = 1,000,000,000,000 bytes). However, capacities of memory are usually quoted using a binary interpretation of the prefixes, i.e. using powers of 1024 instead of 1000.
Software reports hard disk drive or memory capacity in different forms using either decimal or binary prefixes. The Microsoft Windows family of operating systems uses the binary convention when reporting storage capacity, so an HDD offered by its manufacturer as a 1 TB drive is reported by these operating systems as a 931 GB HDD. Mac OS X 10.6 ("Snow Leopard") uses the decimal convention when reporting HDD capacity. The default behavior of the df command-line utility on Linux is to report the HDD capacity as a number of 1024-byte units.
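The 1 TB versus 931 GB discrepancy follows directly from the two conventions; a quick sketch of the arithmetic:

    # One marketed terabyte expressed under decimal and binary prefix conventions.
    capacity_bytes = 1_000_000_000_000            # sold as "1 TB"
    decimal_gb = capacity_bytes / 1000**3         # gigabytes, powers of 1000
    binary_gb = capacity_bytes / 1024**3          # "GB" as reported by Windows (strictly GiB)
    print(f"{decimal_gb:.0f} GB decimal = {binary_gb:.0f} GB binary")   # 1000 vs 931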
The difference between the decimal and binary prefix interpretation caused some consumer confusion and led to class action suits against HDD manufacturers. The plaintiffs argued that the use of decimal prefixes effectively misled consumers while the defendants denied any wrongdoing or liability, asserting that their marketing and advertising complied in all respects with the law and that no class member sustained any damages or injuries.
Price evolution
HDD price per byte decreased at the rate of 40% per year during 1988–1996, 51% per year during 1996–2003, and 34% per year during 2003–2010. The price decrease slowed to 13% per year during 2011–2014, as areal density growth slowed and the 2011 Thailand floods damaged manufacturing facilities, and held at 11% per year during 2010–2017.
The Federal Reserve Board has published a quality-adjusted price index for large-scale enterprise storage systems including three or more enterprise HDDs and associated controllers, racks and cables. Prices for these large-scale storage systems decreased at the rate of 30% per year during 2004–2009 and 22% per year during 2009–2014.
Form factors
IBM's first hard disk drive, the IBM 350, used a stack of fifty 24-inch platters, stored 3.75 MB of data (approximately the size of one modern digital picture), and was of a size comparable to two large refrigerators. In 1962, IBM introduced its model 1311 disk, which used six 14-inch (nominal size) platters in a removable pack and was roughly the size of a washing machine. This became a standard platter size for many years, used also by other manufacturers. The IBM 2314 used platters of the same size in an eleven-high pack and introduced the "drive in a drawer" layout, sometimes called the "pizza oven", although the "drawer" was not the complete drive. Into the 1970s HDDs were offered in standalone cabinets of varying dimensions containing from one to four HDDs.
Beginning in the late 1960s drives were offered that fit entirely into a chassis that would mount in a 19-inch rack. Digital's RK05 and RL01 were early examples using single 14-inch platters in removable packs, the entire drive fitting in a 10.5-inch-high rack space (six rack units). In the mid-to-late 1980s the similarly sized Fujitsu Eagle, which used (coincidentally) 10.5-inch platters, was a popular product.
With increasing sales of microcomputers having built in floppy-disk drives (FDDs), HDDs that would fit to the FDD mountings became desirable. Starting with the Shugart Associates SA1000, HDD form factors initially followed those of 8-inch, 5¼-inch, and 3½-inch floppy disk drives. Although referred to by these nominal sizes, the actual sizes for those three drives respectively are 9.5", 5.75" and 4" wide. Because there were no smaller floppy disk drives, smaller HDD form factors such as 2½-inch drives (actually 2.75" wide) developed from product offerings or industry standards.
2½-inch and 3½-inch hard disks are the most popular sizes. By 2009, all manufacturers had discontinued the development of new products for the 1.3-inch, 1-inch and 0.85-inch form factors due to falling prices of flash memory, which has no moving parts. While nominal sizes are in inches, actual dimensions are specified in millimeters.
Performance characteristics
The factors that limit the time to access the data on an HDD are mostly related to the mechanical nature of the rotating disks and moving heads, including:
Seek time is a measure of how long it takes the head assembly to travel to the track of the disk that contains data.
Rotational latency is incurred because the desired disk sector may not be directly under the head when data transfer is requested. On average, rotational latency is one-half the rotational period (see the Latency section below).
The bit rate or data transfer rate (once the head is in the right position) creates delay which is a function of the number of blocks transferred; typically relatively small, but can be quite long with the transfer of large contiguous files.
Delay may also occur if the drive disks are stopped to save energy.
Defragmentation is a procedure used to minimize delay in retrieving data by moving related items to physically proximate areas on the disk. Some computer operating systems perform defragmentation automatically. Although automatic defragmentation is intended to reduce access delays, performance will be temporarily reduced while the procedure is in progress.
Time to access data can be improved by increasing rotational speed (thus reducing latency) or by reducing the time spent seeking. Increasing areal density increases throughput by increasing data rate and by increasing the amount of data under a set of heads, thereby potentially reducing seek activity for a given amount of data. The time to access data has not kept up with throughput increases, which themselves have not kept up with growth in bit density and storage capacity.
Latency
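Since average rotational latency is half the rotational period, it can be computed directly from the spindle speed; a short sketch for the spindle speeds mentioned elsewhere in this article:

    # Average rotational latency = half the rotational period.
    for rpm in (15000, 10000, 7200, 5400, 4200):
        latency_ms = 0.5 * 60_000 / rpm
        print(f"{rpm:>6} rpm -> {latency_ms:.2f} ms average rotational latency")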
Data transfer rate
A typical 7,200-rpm desktop HDD has a sustained "disk-to-buffer" data transfer rate up to 1,030 Mbit/s. This rate depends on the track location; the rate is higher for data on the outer tracks (where there are more data sectors per rotation) and lower toward the inner tracks (where there are fewer data sectors per rotation); and is generally somewhat higher for 10,000-rpm drives. A widely used standard for the "buffer-to-computer" interface is 3.0 Gbit/s SATA, which can send about 300 megabytes per second (10-bit encoding) from the buffer to the computer, and thus is still comfortably ahead of today's disk-to-buffer transfer rates. Data transfer rate (read/write) can be measured by writing a large file to disk using special file generator tools, then reading back the file. Transfer rate can be influenced by file system fragmentation and the layout of the files.
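The relationship between the 3.0 Gbit/s line rate and the roughly 300 MB/s payload figure comes from the 10-bit (8b/10b) encoding, in which ten line bits carry each data byte; a quick check of the arithmetic:

    # SATA II line rate to payload throughput under 8b/10b encoding.
    line_rate_bits_per_s = 3.0e9     # SATA 3.0 Gbit/s signalling rate
    line_bits_per_data_byte = 10     # 8b/10b: 10 transmitted bits per 8-bit data byte
    payload_bytes_per_s = line_rate_bits_per_s / line_bits_per_data_byte
    print(f"about {payload_bytes_per_s / 1e6:.0f} MB/s of payload")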
HDD data transfer rate depends upon the rotational speed of the platters and the data recording density. Because heat and vibration limit rotational speed, advancing density becomes the main method to improve sequential transfer rates. Higher speeds require a more powerful spindle motor, which creates more heat. While areal density advances by increasing both the number of tracks across the disk and the number of sectors per track, only the latter increases the data transfer rate for a given rpm. Since data transfer rate performance tracks only one of the two components of areal density, its performance improves at a lower rate.
Other considerations
Other performance considerations include quality-adjusted price, power consumption, audible noise, and both operating and non-operating shock resistance.
Access and interfaces
Current hard drives connect to a computer over one of several bus types, including parallel ATA, Serial ATA, SCSI, Serial Attached SCSI (SAS), and Fibre Channel. Some drives, especially external portable drives, use IEEE 1394, or USB. All of these interfaces are digital; electronics on the drive process the analog signals from the read/write heads. Current drives present a consistent interface to the rest of the computer, independent of the data encoding scheme used internally, and independent of the physical number of disks and heads within the drive.
Typically a DSP in the electronics inside the drive takes the raw analog voltages from the read head and uses PRML and Reed–Solomon error correction to decode the data, then sends that data out the standard interface. That DSP also watches the error rate detected by error detection and correction, and performs bad sector remapping, data collection for Self-Monitoring, Analysis, and Reporting Technology, and other internal tasks.
Modern interfaces connect the drive to the host interface with a single data/control cable. Each drive also has an additional power cable, usually direct to the power supply unit. Older interfaces had separate cables for data signals and for drive control signals.
Small Computer System Interface (SCSI), originally named SASI for Shugart Associates System Interface, was standard on servers, workstations, Commodore Amiga, Atari ST and Apple Macintosh computers through the mid-1990s, by which time most models had been transitioned to newer interfaces. The length limit of the data cable allows for external SCSI devices. The SCSI command set is still used in the more modern SAS interface.
Integrated Drive Electronics (IDE), later standardized under the name AT Attachment (ATA, with the alias PATA (Parallel ATA) retroactively added upon introduction of SATA) moved the HDD controller from the interface card to the disk drive. This helped to standardize the host/controller interface, reduce the programming complexity in the host device driver, and reduced system cost and complexity. The 40-pin IDE/ATA connection transfers 16 bits of data at a time on the data cable. The data cable was originally 40-conductor, but later higher speed requirements led to an "ultra DMA" (UDMA) mode using an 80-conductor cable with additional wires to reduce crosstalk at high speed.
EIDE was an unofficial update (by Western Digital) to the original IDE standard, with the key improvement being the use of direct memory access (DMA) to transfer data between the disk and the computer without the involvement of the CPU, an improvement later adopted by the official ATA standards. By directly transferring data between memory and disk, DMA eliminates the need for the CPU to copy byte per byte, therefore allowing it to process other tasks while the data transfer occurs.
Fibre Channel (FC) is a successor to the parallel SCSI interface on the enterprise market. It is a serial protocol. In disk drives, usually the Fibre Channel Arbitrated Loop (FC-AL) connection topology is used. FC has much broader usage than mere disk interfaces, and it is the cornerstone of storage area networks (SANs). Recently other protocols for this field, like iSCSI and ATA over Ethernet, have been developed as well. Confusingly, drives usually use copper twisted-pair cables for Fibre Channel, not fibre optics. The latter are traditionally reserved for larger devices, such as servers or disk array controllers.
Serial Attached SCSI (SAS). SAS is a newer-generation serial communication protocol for devices, designed to allow for much higher speed data transfers, and is compatible with SATA. SAS uses a mechanically compatible data and power connector to standard 3.5-inch SATA1/SATA2 HDDs, and many server-oriented SAS RAID controllers are also capable of addressing SATA HDDs. SAS uses serial communication instead of the parallel method found in traditional SCSI devices but still uses SCSI commands.
Serial ATA (SATA). The SATA data cable has one data pair for differential transmission of data to the device, and one pair for differential receiving from the device, just like EIA-422. That requires that data be transmitted serially. A similar differential signaling system is used in RS485, LocalTalk, USB, FireWire, and differential SCSI. SATA I to III are designed to be compatible with, and use, a subset of SAS commands, and compatible interfaces. Therefore, a SATA hard drive can be connected to and controlled by a SAS hard drive controller (with some minor exceptions such as drives/controllers with limited compatibility). However they cannot be connected the other way round—a SATA controller cannot be connected to a SAS drive.
Integrity and failure
Due to the extremely close spacing between the heads and the disk surface, HDDs are vulnerable to being damaged by a head crash – a failure of the disk in which the head scrapes across the platter surface, often grinding away the thin magnetic film and causing data loss. Head crashes can be caused by electronic failure, a sudden power failure, physical shock, contamination of the drive's internal enclosure, wear and tear, corrosion, or poorly manufactured platters and heads.
The HDD's spindle system relies on air density inside the disk enclosure to support the heads at their proper flying height while the disk rotates. HDDs require a certain range of air densities to operate properly. The connection to the external environment and density occurs through a small hole in the enclosure (about 0.5 mm in breadth), usually with a filter on the inside (the breather filter). If the air density is too low, then there is not enough lift for the flying head, so the head gets too close to the disk, and there is a risk of head crashes and data loss. Specially manufactured sealed and pressurized disks are needed for reliable high-altitude operation. Modern disks include temperature sensors and adjust their operation to the operating environment. Breather holes can be seen on all disk drives – they usually have a sticker next to them, warning the user not to cover the holes. The air inside the operating drive is constantly moving too, being swept in motion by friction with the spinning platters. This air passes through an internal recirculation (or "recirc") filter to remove any leftover contaminants from manufacture, any particles or chemicals that may have somehow entered the enclosure, and any particles or outgassing generated internally in normal operation. Very high humidity present for extended periods of time can corrode the heads and platters. An exception to this are hermetically sealed, helium-filled HDDs that largely eliminate environmental issues that can arise due to humidity or atmospheric pressure changes. Such HDDs were introduced by HGST in their first successful high-volume implementation in 2013.
For giant magnetoresistive (GMR) heads in particular, a minor head crash from contamination (that does not remove the magnetic surface of the disk) still results in the head temporarily overheating, due to friction with the disk surface, and can render the data unreadable for a short period until the head temperature stabilizes (so called "thermal asperity", a problem which can partially be dealt with by proper electronic filtering of the read signal).
When the logic board of a hard disk fails, the drive can often be restored to functioning order and the data recovered by replacing the circuit board with one of an identical hard disk. In the case of read-write head faults, they can be replaced using specialized tools in a dust-free environment. If the disk platters are undamaged, they can be transferred into an identical enclosure and the data can be copied or cloned onto a new drive. In the event of disk-platter failures, disassembly and imaging of the disk platters may be required. For logical damage to file systems, a variety of tools, including fsck on UNIX-like systems and CHKDSK on Windows, can be used for data recovery. Recovery from logical damage can require file carving.
A common expectation is that hard disk drives designed and marketed for server use will fail less frequently than consumer-grade drives usually used in desktop computers. However, two independent studies by Carnegie Mellon University and Google found that the "grade" of a drive does not relate to the drive's failure rate.
A 2011 summary of research into SSD and magnetic disk failure patterns by Tom's Hardware reported the following findings:
Mean time between failures (MTBF) does not indicate reliability; the annualized failure rate is higher and usually more relevant (see the sketch after this list).
HDDs do not tend to fail during early use, and temperature has only a minor effect; instead, failure rates steadily increase with age.
S.M.A.R.T. warns of mechanical issues but not other issues affecting reliability, and is therefore not a reliable indicator of condition.
Failure rates of drives sold as "enterprise" and "consumer" are "very much similar", although these drive types are customized for their different operating environments.
In drive arrays, one drive's failure significantly increases the short-term risk of a second drive failing.
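As a rough sketch of why MTBF and annualized failure rate differ in practice, a quoted MTBF can be converted into the nominal AFR it implies under a constant-failure-rate assumption; the MTBF figures below are illustrative datasheet-style values, and observed field failure rates are typically higher than the nominal result.

    import math

    # Nominal annualized failure rate implied by a quoted MTBF,
    # assuming a constant failure rate (exponential lifetime model).
    def afr_from_mtbf(mtbf_hours, hours_per_year=8766):
        return 1 - math.exp(-hours_per_year / mtbf_hours)

    for mtbf in (1_200_000, 2_000_000):    # assumed datasheet MTBF values, in hours
        print(f"MTBF {mtbf:,} h -> nominal AFR {afr_from_mtbf(mtbf):.2%}")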
Backblaze, a storage provider, reported an annualized failure rate of two percent per year for a storage farm with 110,000 off-the-shelf HDDs, with reliability varying widely between models and manufacturers. Backblaze subsequently reported that the failure rate for HDDs and SSDs of equivalent age was similar.
To minimize cost and overcome failures of individual HDDs, storage systems providers rely on redundant HDD arrays. HDDs that fail are replaced on an ongoing basis.
Market segments
Consumer segment
Desktop HDDs
Desktop HDDs typically have two to five internal platters, rotate at 5,400 to 10,000 rpm, and have a media transfer rate of 0.5 Gbit/s or higher (1 GB = 10^9 bytes; 1 Gbit/s = 10^9 bit/s). Earlier (1980s–1990s) drives tended to have slower rotation speeds. In 2019, the highest-capacity desktop HDDs stored 16 TB, with plans to release 18 TB drives later that year; 18 TB HDDs were released in 2020. The typical speed of a hard drive in an average desktop computer is 7,200 rpm, whereas low-cost desktop computers may use 5,900 rpm or 5,400 rpm drives. For some time in the 2000s and early 2010s some desktop users and data centers also used 10,000 rpm drives such as the Western Digital Raptor, but such drives have become much rarer and are not commonly used now, having been replaced by NAND flash-based SSDs.
Mobile (laptop) HDDs
Smaller than their desktop and enterprise counterparts, mobile HDDs tend to be slower and have lower capacity, because they typically have one internal platter and use a 2.5-inch or 1.8-inch form factor rather than the 3.5-inch form factor common in desktops. Mobile HDDs spin at 4,200 rpm, 5,200 rpm, 5,400 rpm, or 7,200 rpm, with 5,400 rpm being the most common. 7,200 rpm drives tend to be more expensive and have smaller capacities, while 4,200 rpm models usually have very high storage capacities. Because of their smaller platters, mobile HDDs generally have lower capacity than their desktop counterparts.
Consumer electronics HDDs
They include drives embedded into digital video recorders and automobiles. The former are configured to provide a guaranteed streaming capacity, even in the face of read and write errors, while the latter are built to resist larger amounts of shock. They usually spin at 5,400 RPM.
External and portable HDDs
Current external hard disk drives typically connect via USB-C; earlier models use a regular USB connection (sometimes with a pair of ports for better bandwidth) or, rarely, eSATA. Variants using the USB 2.0 interface generally have slower data transfer rates than internally mounted hard drives connected through SATA. Plug-and-play drive functionality offers system compatibility, large storage options and portable design. Available capacities for external hard disk drives have ranged from 500 GB to 10 TB. External hard disk drives are usually available as assembled integrated products but may also be assembled by combining an external enclosure (with a USB or other interface) with a separately purchased drive. They are available in 2.5-inch and 3.5-inch sizes; 2.5-inch variants are typically called portable external drives, while 3.5-inch variants are referred to as desktop external drives. "Portable" drives are packaged in smaller and lighter enclosures than "desktop" drives; additionally, "portable" drives use power provided by the USB connection, while "desktop" drives require external power bricks. Features such as encryption, Wi-Fi connectivity, biometric security or multiple interfaces (for example, FireWire) are available at a higher cost. There are pre-assembled external hard disk drives that, when taken out of their enclosures, cannot be used internally in a laptop or desktop computer due to the embedded USB interface on their printed circuit boards and the lack of SATA (or Parallel ATA) interfaces.
Enterprise and business segment
Server and workstation HDDs
Server and workstation HDDs are typically used with multiple-user computers running enterprise software. Examples include transaction-processing databases, internet infrastructure (email, web servers, e-commerce), scientific computing software, and nearline storage management software. Enterprise drives commonly operate continuously ("24/7") in demanding environments while delivering the highest possible performance without sacrificing reliability. Maximum capacity is not the primary goal, and as a result the drives are often offered in capacities that are relatively low in relation to their cost.
The fastest enterprise HDDs spin at 10,000 or 15,000 rpm, and can achieve sequential media transfer speeds above 1.6 Gbit/s and a sustained transfer rate up to 1 Gbit/s. Drives running at 10,000 or 15,000 rpm use smaller platters to mitigate increased power requirements (as they have less air drag) and therefore generally have lower capacity than the highest capacity desktop drives. Enterprise HDDs are commonly connected through Serial Attached SCSI (SAS) or Fibre Channel (FC). Some support multiple ports, so they can be connected to a redundant host bus adapter.
Enterprise HDDs can have sector sizes larger than 512 bytes (often 520, 524, 528 or 536 bytes). The additional per-sector space can be used by hardware RAID controllers or applications for storing Data Integrity Field (DIF) or Data Integrity Extensions (DIX) data, resulting in higher reliability and prevention of silent data corruption.
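To make the extended-sector idea concrete, the sketch below assembles a 520-byte sector as 512 bytes of data plus an 8-byte protection field in the general style of T10 DIF (a guard tag over the data, a 2-byte application tag, and a 4-byte reference tag derived from the logical block address). The exact CRC polynomial and field semantics vary by format and controller, and the CRC-32-based guard used here is only a stand-in, so treat this as an illustration rather than a wire-accurate implementation.

```python
import struct
import zlib

def protection_info(data_512: bytes, lba: int, app_tag: int = 0) -> bytes:
    """Illustrative 8-byte protection field: guard tag + application tag + reference tag.
    Real T10 DIF specifies a particular CRC-16 guard; a truncated CRC-32 keeps the sketch short."""
    guard = zlib.crc32(data_512) & 0xFFFF          # stand-in for the 2-byte guard tag
    ref_tag = lba & 0xFFFFFFFF                     # low 32 bits of the logical block address
    return struct.pack(">HHI", guard, app_tag, ref_tag)

def make_520_byte_sector(data_512: bytes, lba: int) -> bytes:
    assert len(data_512) == 512
    return data_512 + protection_info(data_512, lba)

sector = make_520_byte_sector(b"\x00" * 512, lba=1234)
print(len(sector))  # 520: 512 data bytes plus 8 bytes of per-sector integrity metadata
```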
Video recording HDDs
This line is similar to consumer video-recording HDDs in its stream-stability requirements and to server HDDs in its need for expandability, but it is strongly oriented toward increasing internal capacity. The main trade-off for this segment is write and read speed.
Manufacturers and sales
More than 200 companies have manufactured HDDs over time, but consolidations have concentrated production among just three manufacturers today: Western Digital, Seagate, and Toshiba. Production is mainly in the Pacific Rim.
Worldwide revenue for disk storage declined eight percent per year, from a peak of $38 billion in 2012 to $22 billion (estimated) in 2019. Production of HDD storage grew 15% per year during 2011–2017, from 335 to 780 exabytes per year. HDD shipments declined seven percent per year during this time period, from 620 to 406 million units. HDD shipments were projected to drop by 18% during 2018–2019, from 375 million to 309 million units. In 2018, Seagate had 40% of unit shipments, Western Digital had 37%, and Toshiba had 23%. The average sales price for the two largest manufacturers was $60 per unit in 2015.
Competition from SSDs
HDDs are being superseded by solid-state drives (SSDs) in markets where their higher speed (up to 4950 megabytes (4.95 gigabytes) per second for M.2 (NGFF) NVMe SSDs, or 2500 megabytes (2.5 gigabytes) per second for PCIe expansion card drives), ruggedness, and lower power consumption are more important than price, since the bit cost of SSDs is four to nine times higher than that of HDDs. HDDs are reported to have a failure rate of 2–9% per year, while SSDs have fewer failures: 1–3% per year. However, SSDs have more uncorrectable data errors than HDDs.
SSDs offer larger capacities (up to 100 TB) than the largest HDD and/or higher storage densities (100 TB and 30 TB SSDs are housed in 2.5 inch HDD cases but with the same height as a 3.5-inch HDD), although their cost remains prohibitive.
A laboratory demonstration of a 1.33-Tb 3D NAND chip with 96 layers (NAND is commonly used in solid-state drives) achieved an areal density of 5.5 Tbit/in², while the maximum areal density for HDDs is 1.5 Tbit/in². The areal density of flash memory is doubling every two years, similar to Moore's law (40% per year) and faster than the 10–20% per year for HDDs. The maximum capacity was 16 terabytes for an HDD and 100 terabytes for an SSD. HDDs were used in 70% of the desktop and notebook computers produced in 2016, and SSDs were used in 30%. The usage share of HDDs is declining and could drop below 50% in 2018–2019 according to one forecast, because SSDs are replacing smaller-capacity (less than one-terabyte) HDDs in desktop and notebook computers and MP3 players.
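The growth rates quoted above are consistent with one another: doubling every two years corresponds to an annual compound growth rate of about 41%, close to the Moore's-law figure cited, while 10–20% per year implies a doubling time of roughly four to seven years. A quick check of the arithmetic, using only the figures from the text:

```python
import math

# Compound-growth arithmetic behind the areal-density comparison above.
doubling_years = 2
annual_rate = 2 ** (1 / doubling_years) - 1
print(f"Doubling every {doubling_years} years = {annual_rate:.0%} per year")   # ~41%

for r in (0.10, 0.20):   # HDD areal-density growth of 10-20% per year
    print(f"{r:.0%} per year doubles in {math.log(2) / math.log(1 + r):.1f} years")
# 10% -> ~7.3 years, 20% -> ~3.8 years
```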
The market for silicon-based flash memory (NAND) chips, used in SSDs and other applications, is growing faster than for HDDs. Worldwide NAND revenue grew 16% per year from $22 billion to $57 billion during 2011–2017, while production grew 45% per year from 19 exabytes to 175 exabytes.
See also
Automatic acoustic management
Cleanroom
Click of death
Comparison of disk encryption software
Data erasure
Drive mapping
Error recovery control
Hard disk drive performance characteristics
Hybrid drive
Microdrive
Network drive (file server, shared resource)
Object storage
Write precompensation
Notes
References
Further reading
External links
Hard Disk Drives Encyclopedia
Video showing an opened HD working
Average seek time of a computer disk
Timeline: 50 Years of Hard Drives
HDD from inside: Tracks and Zones. How hard it can be?
Hard disk hacking firmware modifications, in eight parts, going as far as booting a Linux kernel on an ordinary HDD controller board
Hiding Data in Hard Drive’s Service Areas, February 14, 2013, by Ariel Berkman
Rotary Acceleration Feed Forward (RAFF) Information Sheet, Western Digital, January 2013
PowerChoice Technology for Hard Disk Drive Power Savings and Flexibility, Seagate Technology, March 2010
Shingled Magnetic Recording (SMR), HGST, Inc., 2015
The Road to Helium, HGST, Inc., 2015
Research paper about perspective usage of magnetic photoconductors in magneto-optical data storage.
American inventions
Articles containing video clips
Computer data storage
Computer storage devices
Rotating disc computer storage media
20th-century inventions |
13919 | https://en.wikipedia.org/wiki/Hezbollah | Hezbollah | Hezbollah (lit. "Party of Allah" or "Party of God"; also transliterated Hizbullah or Hizballah, among others) is a Lebanese Shia Islamist political party and militant group, led by its Secretary-General Hassan Nasrallah since 1992. Hezbollah's paramilitary wing is the Jihad Council, and its political wing is the Loyalty to the Resistance Bloc party in the Lebanese Parliament.
After the Israeli invasion of Lebanon in 1982, the idea of Hezbollah arose among Lebanese clerics who had studied in Najaf, and who adopted the model set out by Ayatollah Khomeini after the Iranian Revolution in 1979. The organization was established as part of an Iranian effort, through funding and the dispatch of a core group of Islamic Revolutionary Guard Corps (pasdaran) instructors, to aggregate a variety of Lebanese Shia groups into a unified organization to resist the Israeli occupation and improve the standing and status of the long marginalised and underrepresented Shia community in that country. A contingent of 1,500 pasdaran instructors arrived after the Syrian government, which occupied Lebanon's eastern highlands, permitted their transit to a base in the Bekaa valley.
During the Lebanese Civil War, Hezbollah's 1985 manifesto listed its objectives as the expulsion of "the Americans, the French and their allies definitely from Lebanon, putting an end to any colonialist entity on our land", the submission of the Christian Phalangists to "just power", bringing them to justice "for the crimes they have perpetrated against Muslims and Christians", and permitting "all the sons of our people" to choose the form of government they want, while calling on them to "pick the option of Islamic government". Hezbollah organised volunteers who fought for the Army of the Republic of Bosnia and Herzegovina during the Bosnian War. From 1985 to 2000, Hezbollah participated in the South Lebanon conflict against the South Lebanon Army (SLA) and Israel Defense Forces (IDF), which finally led to the rout of the SLA and the retreat of the IDF from South Lebanon in 2000. Hezbollah and the IDF fought each other again in the 2006 Lebanon War.
Its military strength has grown so significantly since 2006 that its paramilitary wing is considered more powerful than the Lebanese Army. Hezbollah has been described as a "state within a state" and has grown into an organization with seats in the Lebanese government, a radio and a satellite TV station, social services and large-scale military deployment of fighters beyond Lebanon's borders. Hezbollah is part of Lebanon's March 8 Alliance, in opposition to the March 14 Alliance. It maintains strong support among Lebanese Shia Muslims, while Sunnis have disagreed with its agenda. Hezbollah also has support in some Christian areas of Lebanon. It receives military training, weapons, and financial support from Iran and political support from Syria.
Since 1990, Hezbollah has participated in Lebanese politics, in a process described as the Lebanonisation of Hezbollah, and it later participated in the government of Lebanon and joined political alliances. After the 2006–08 Lebanese protests and clashes, a national unity government was formed in 2008, with Hezbollah and its opposition allies obtaining 11 of 30 cabinet seats, enough to give them veto power. In August 2008, Lebanon's new cabinet unanimously approved a draft policy statement that recognizes Hezbollah's existence as an armed organization and guarantees its right to "liberate or recover occupied lands" (such as the Shebaa Farms). Since 2012, Hezbollah's involvement in the Syrian civil war has seen it join the Syrian government in its fight against the Syrian opposition, which Hezbollah has described as a Zionist plot and a "Wahhabi-Zionist conspiracy" to destroy its alliance with Bashar al-Assad against Israel. Between 2013 and 2015, the organisation deployed its militia in both Syria and Iraq to fight or train local militias to fight against the Islamic State. The group's legitimacy is considered to have been severely damaged by the sectarian nature of the Syrian war. In the 2018 Lebanese general election, Hezbollah held 12 seats and its alliance won the election by gaining 70 out of 128 seats in the Parliament of Lebanon. Nasrallah declared in 2021 that the group had 100,000 fighters.
Either the entire organization or only its military wing has been designated a terrorist organization by several countries, including by the European Union and, since 2017, also by most member states of the Arab League, with two exceptions – Lebanon, where Hezbollah is the most powerful political party, and Iraq. Russia does not view Hezbollah as a "terrorist organization" but as a "legitimate socio-political force".
History
Foundation
In 1982, Hezbollah was conceived by Muslim clerics and funded by Iran primarily to resist the Israeli invasion of Lebanon. Its leaders were followers of Ayatollah Khomeini, and its forces were trained and organized by a contingent of 1,500 Revolutionary Guards that arrived from Iran with permission from the Syrian government, which occupied Lebanon's eastern highlands at the time and permitted their transit to a base in the Bekaa Valley.
Scholars differ as to when Hezbollah came to be a distinct entity. Various sources list the official formation of the group as early as 1982, whereas Diaz and Newman maintain that Hezbollah remained an amalgamation of various violent Shi'a extremists until as late as 1985. Another version states that it was formed by supporters of Sheikh Ragheb Harb, a leader of the southern Shia resistance killed by Israel in 1984. Regardless of when the name came into official use, a number of Shi'a groups were slowly assimilated into the organization, such as Islamic Jihad, the Organization of the Oppressed on Earth and the Revolutionary Justice Organization. These designations are considered to be synonymous with Hezbollah by the US, Israel and Canada.
1980s
Hezbollah emerged in South Lebanon during a consolidation of Shia militias as a rival to the older Amal Movement. Hezbollah played a significant role in the Lebanese civil war, opposing American forces in 1982–83 and opposing Amal and Syria during the 1985–88 War of the Camps. However, Hezbollah's early primary focus was ending Israel's occupation of southern Lebanon following Israel's 1982 invasion and siege of Beirut. Amal, the main Lebanese Shia political group, initiated guerrilla warfare. In 2006, former Israeli prime minister Ehud Barak stated, "When we entered Lebanon … there was no Hezbollah. We were accepted with perfumed rice and flowers by the Shia in the south. It was our presence there that created Hezbollah".
Hezbollah waged an asymmetric war using suicide attacks against the Israel Defense Forces (IDF) and Israeli targets outside of Lebanon. Hezbollah is reputed to have been among the first Islamic resistance groups in the Middle East to use the tactics of suicide bombing, assassination, and capturing foreign soldiers, as well as murders and hijackings. Hezbollah also employed more conventional military tactics and weaponry, notably Katyusha rockets and other missiles. At the end of the Lebanese Civil War in 1990, despite the Taif Agreement asking for the "disbanding of all Lebanese and non-Lebanese militias," Syria, which controlled Lebanon at that time, allowed Hezbollah to maintain their arsenal and control Shia areas along the border with Israel.
After 1990
In the 1990s, Hezbollah transformed from a revolutionary group into a political one, in a process which is described as the Lebanonisation of Hezbollah. Unlike its uncompromising revolutionary stance in the 1980s, Hezbollah conveyed a lenient stance towards the Lebanese state.
In 1992 Hezbollah decided to participate in elections, and Ali Khamenei, supreme leader of Iran, endorsed it. Former Hezbollah secretary general, Subhi al-Tufayli, contested this decision, which led to a schism in Hezbollah. Hezbollah won all twelve seats which were on its electoral list. At the end of that year, Hezbollah began to engage in dialog with Lebanese Christians. Hezbollah regards cultural, political, and religious freedoms in Lebanon as sanctified, although it does not extend these values to groups who have relations with Israel.
In 1997 Hezbollah formed the multi-confessional Lebanese Brigades to Resist the Israeli Occupation in an attempt to revive national and secular resistance against Israel, thereby marking the "Lebanonisation" of resistance.
Islamic Jihad Organization (IJO)
Whether the Islamic Jihad Organization (IJO) was a nom de guerre used by Hezbollah or a separate organization, is disputed. According to certain sources, IJO was identified as merely a "telephone organization", and whose name was "used by those involved to disguise their true identity." Hezbollah reportedly also used another name, "Islamic Resistance" (al-Muqawama al-Islamiyya), for attacks against Israel.
A 2003 American court decision found IJO was the name used by Hezbollah for its attacks in Lebanon, parts of the Middle East and Europe. The US, Israel and Canada consider the names "Islamic Jihad Organization", "Organization of the Oppressed on Earth" and the "Revolutionary Justice Organization" to be synonymous with Hezbollah.
Ideology
The ideology of Hezbollah has been summarized as Shi'i radicalism; Hezbollah follows the Islamic Shi'a theology developed by Iranian leader Ayatollah Ruhollah Khomeini. Hezbollah was largely formed with the aid of Khomeini's followers in the early 1980s in order to spread Islamic revolution, and it follows a distinct version of Islamic Shi'a ideology (Wilayat al-faqih, or Guardianship of the Islamic Jurists) developed by Khomeini, the leader of the "Islamic Revolution" in Iran. Although Hezbollah originally aimed to transform Lebanon into a formal Faqihi Islamic republic, this goal has been abandoned in favor of a more inclusive approach.
1985 manifesto
On 16 February 1985, Sheik Ibrahim al-Amin issued Hezbollah's manifesto. The ideology presented in it was described as radical. Its first objective was to fight against what Hezbollah described as American and Israeli imperialism, including the Israeli occupation of Southern Lebanon and other territories. The second objective was to gather all Muslims into an "ummah", under which Lebanon would further the aims of the 1979 Revolution of Iran. It also declared it would protect all Lebanese communities, excluding those that collaborated with Israel, and support all national movements—both Muslim and non-Muslim—throughout the world. The ideology has since evolved, and today Hezbollah is a left-wing political entity focused on social injustice.
Translated excerpts from Hezbollah's original 1985 manifesto read:
Attitudes, statements, and actions concerning Israel and Zionism
From the inception of Hezbollah to the present, the elimination of the State of Israel has been one of Hezbollah's primary goals. Some translations of Hezbollah's 1985 Arabic-language manifesto state that "our struggle will end only when this entity [Israel] is obliterated". According to Hezbollah's Deputy-General, Naim Qassem, the struggle against Israel is a core belief of Hezbollah and the central rationale of Hezbollah's existence.
Hezbollah says that its continued hostilities against Israel are justified as reciprocal to Israeli operations against Lebanon and as retaliation for what it claims is Israel's occupation of Lebanese territory. Israel withdrew from Lebanon in 2000, and its withdrawal was verified by the United Nations as being in accordance with Resolution 425 of 19 March 1978; however, Lebanon considers the Shebaa Farms, a 26 km² (10 sq mi) piece of land captured by Israel from Syria in the 1967 war and considered by the UN to be Syrian territory occupied by Israel, to be Lebanese territory. Finally, Hezbollah considers Israel to be an illegitimate state. For these reasons, it justifies its actions as acts of defensive jihad.
Attitudes and actions concerning Jews and Judaism
Hezbollah officials have said, on rare occasions, that it is only "anti-Zionist" and not anti-Semitic. However, according to scholars, "these words do not hold up upon closer examination". Among other actions, Hezbollah actively engages in Holocaust denial and spreads anti-Semitic conspiracy theories.
Various antisemitic statements have been attributed to Hezbollah officials. Amal Saad-Ghorayeb, a Lebanese political analyst, argues that although Zionism has influenced Hezbollah's anti-Judaism, "it is not contingent upon it because Hezbollah's hatred of Jews is more religiously motivated than politically motivated". Robert S. Wistrich, a historian specializing in the study of anti-Semitism, described Hezbollah's ideology concerning Jews:
The anti-Semitism of Hezbollah leaders and spokesmen combines the image of seemingly invincible Jewish power ... and cunning with the contempt normally reserved for weak and cowardly enemies. Like the Hamas propaganda for holy war, that of Hezbollah has relied on the endless vilification of Jews as 'enemies of mankind,' 'conspiratorial, obstinate, and conceited' adversaries full of 'satanic plans' to enslave the Arabs. It fuses traditional Islamic anti-Judaism with Western conspiracy myths, Third Worldist anti-Zionism, and Iranian Shiite contempt for Jews as 'ritually impure' and corrupt infidels. Sheikh Fadlallah typically insists ... that Jews wish to undermine or obliterate Islam and Arab cultural identity in order to advance their economic and political domination.
Conflicting reports say Al-Manar, the Hezbollah-owned and operated television station, accused either Israel or Jews of deliberately spreading HIV and other diseases to Arabs throughout the Middle East. Al-Manar was criticized in the West for airing "anti-Semitic propaganda" in the form of a television drama depicting a Jewish world domination conspiracy theory. The group has been accused by American analysts of engaging in Holocaust denial. In addition, during its 2006 war, it apologized only for killing Israel's Arabs (i.e., non-Jews).
In November 2009, Hezbollah pressured a private English-language school to drop reading excerpts from The Diary of Anne Frank, a book of the writings from the diary kept by the Jewish child Anne Frank while she was in hiding with her family during the Nazi occupation of the Netherlands. This was after Hezbollah's Al-Manar television channel complained, asking how long Lebanon would "remain an open arena for the Zionist invasion of education?"
Organization
In its early years, many Hezbollah leaders maintained that the movement was "not an organization, for its members carry no cards and bear no specific responsibilities," and that the movement did not have "a clearly defined organizational structure." Nowadays, as Hezbollah scholar Magnus Ranstorp reports, Hezbollah does indeed have a formal governing structure, and in keeping with the principle of Guardianship of the Islamic Jurists (velayat-e faqih), it "concentrate[s] ... all authority and powers" in its religious leaders, whose decisions then "flow from the ulama down the entire community."
The supreme decision-making bodies of the Hezbollah were divided between the Majlis al-Shura (Consultative Assembly) which was headed by 12 senior clerical members with responsibility for tactical decisions and supervision of overall Hizballah activity throughout Lebanon, and the Majlis al-Shura al-Karar (the Deciding Assembly), headed by Sheikh Muhammad Hussein Fadlallah and composed of eleven other clerics with responsibility for all strategic matters. Within the Majlis al-Shura, there existed seven specialized committees dealing with ideological, financial, military and political, judicial, informational and social affairs. In turn, the Majlis al-Shura and these seven committees were replicated in each of Hizballah's three main operational areas (the Beqaa, Beirut, and the South).
Since the Supreme Leader of Iran is the ultimate clerical authority, Hezbollah's leaders have appealed to him "for guidance and directives in cases when Hezbollah's collective leadership [was] too divided over issues and fail[ed] to reach a consensus." After the death of Iran's first Supreme Leader, Khomeini, Hezbollah's governing bodies developed a more "independent role" and appealed to Iran less often. Since the Second Lebanon War, however, Iran has restructured Hezbollah to limit the power of Hassan Nasrallah, and invested billions of dollars "rehabilitating" Hezbollah.
Structurally, Hezbollah does not distinguish between its political/social activities within Lebanon and its military/jihad activities against Israel. "Hezbollah has a single leadership," according to Naim Qassem, Hezbollah's second in command. "All political, social and jihad work is tied to the decisions of this leadership ... The same leadership that directs the parliamentary and government work also leads jihad actions in the struggle against Israel."
In 2010, Iran's parliamentary speaker Ali Larijani said, "Iran takes pride in Lebanon's Islamic resistance movement for its steadfast Islamic stance. Hezbollah nurtures the original ideas of Islamic Jihad." He also rejected Western accusations that Iran supports terrorism, saying, "The real terrorists are those who provide the Zionist regime with military equipment to bomb the people."
Funding
Funding of Hezbollah comes from Lebanese business groups, private persons, businessmen, the Lebanese diaspora involved in African diamond exploration, other Islamic groups and countries, and the taxes paid by the Shia Lebanese. Hezbollah says that the main source of its income comes from its own investment portfolios and donations by Muslims.
Western sources maintain that Hezbollah actually receives most of its financial, training, weapons, explosives, political, diplomatic, and organizational aid from Iran and Syria. Iran is said to have given $400 million between 1983 and 1989 in donations. Due to economic problems, Iran temporarily limited its funding to humanitarian actions carried out by Hezbollah. During the late 1980s, when there was extreme inflation due to the collapse of the lira, it was estimated that Hezbollah was receiving $3–5 million a month from Iran.
According to reports released in February 2010, Hezbollah received $400 million from Iran. In 2011 Iran earmarked $7 million for Hezbollah's activities in Latin America.
Hezbollah has also relied on funding from the Shi'ite Lebanese diaspora in West Africa, the United States and, most importantly, the Triple Frontier, or tri-border area, along the junction of Paraguay, Argentina, and Brazil. U.S. law enforcement officials have identified an illegal multimillion-dollar cigarette-smuggling fund-raising operation and a drug-smuggling operation. However, Nasrallah has repeatedly denied any links between the South American drug trade and Hezbollah, calling such accusations "propaganda" and attempts "to damage the image of Hezbollah".
As of 2018, Iranian monetary support for Hezbollah is estimated at $700 million per annum according to US estimates.
The United States has accused members of the Venezuelan government of providing financial aid to Hezbollah.
Social services
Hezbollah organizes and maintains an extensive social development program: it runs hospitals, news services and educational facilities, and encourages Nikah mut'ah. One of its established institutions, Jihad Al Binna's Reconstruction Campaign, is responsible for numerous economic and infrastructure development projects in Lebanon. Hezbollah controls the Martyr's Institute (Al-Shahid Social Association), which pays stipends to "families of fighters who die" in battle. An IRIN news report of the UN Office for the Coordination of Humanitarian Affairs noted:
Hezbollah not only has armed and political wings—it also boasts an extensive social development program. Hezbollah currently operates at least four hospitals, twelve clinics, twelve schools and two agricultural centres that provide farmers with technical assistance and training. It also has an environmental department and an extensive social assistance program. Medical care is also cheaper than in most of the country's private hospitals and free for Hezbollah members.
According to CNN, "Hezbollah did everything that a government should do, from collecting the garbage to running hospitals and repairing schools." In July 2006, during the war with Israel, when there was no running water in Beirut, Hezbollah was arranging supplies around the city. Lebanese Shiites "see Hezbollah as a political movement and a social service provider as much as it is a militia." Hezbollah also rewards its guerrilla members who have been wounded in battle by taking them to Hezbollah-run amusement parks.
Hezbollah is therefore deeply embedded in Lebanese society.
Political activities
Hezbollah along with Amal is one of two major political parties in Lebanon that represent Shiite Muslims. Unlike Amal, whose support is predominantly in the south of the country, Hezbollah maintains broad-based support in all three areas of Lebanon with a majority Shia Muslim population: in the south, in Beirut and its surrounding area, and in the northern Beqaa valley and Hirmil region. It holds 14 of the 128 seats in the Parliament of Lebanon and is a member of the Resistance and Development Bloc. According to Daniel L. Byman, it is "the most powerful single political movement in Lebanon." Hezbollah, along with the Amal Movement, represents most of Lebanese Shi'a. However, unlike Amal, Hezbollah has not disarmed. Hezbollah participates in the Parliament of Lebanon.
Political alliances
Hezbollah has been one of the main parties of the March 8 Alliance since March 2005. Although Hezbollah had joined the new government in 2005, it remained staunchly opposed to the March 14 Alliance. On 1 December 2006, these groups began a series of political protests and sit-ins in opposition to the government of Prime Minister Fouad Siniora.
In 2006, Michel Aoun and Hassan Nasrallah met in Mar Mikhayel Church, Chiyah, and signed a memorandum of understanding between the Free Patriotic Movement and Hezbollah organizing their relations and discussing Hezbollah's disarmament under certain conditions. The agreement also discussed the importance of having normal diplomatic relations with Syria, the request for information about Lebanese political prisoners in Syria, and the return of political prisoners and members of the Lebanese diaspora in Israel.
After this event, Aoun and his party became part of the March 8 Alliance.
On 7 May 2008, Lebanon's 17-month-long political crisis spiraled out of control. The fighting was sparked by a government move to shut down Hezbollah's telecommunication network and remove Beirut Airport's security chief over alleged ties to Hezbollah. Hezbollah leader Hassan Nasrallah said the government's decision to declare the group's military telecommunications network illegal was a "declaration of war" on the organization, and demanded that the government revoke it. Hezbollah-led opposition fighters seized control of several West Beirut neighborhoods from Future Movement militiamen loyal to the government, in street battles that left 11 dead and 30 wounded. The opposition-seized areas were then handed over to the Lebanese Army. The army also pledged to resolve the dispute and reversed the decisions of the government by letting Hezbollah preserve its telecoms network and reinstating the airport's security chief. In the end, rival Lebanese leaders reached consensus on the Doha Agreement on 21 May 2008, ending the 18-month political feud that had exploded into fighting and nearly driven the country into a new civil war. On the basis of this agreement, Hezbollah and its opposition allies were effectively granted veto power in Lebanon's parliament. At the end of the conflict, a national unity government was formed by Fouad Siniora on 11 July 2008, with Hezbollah holding one ministerial post and the opposition eleven of thirty cabinet places.
In the 2018 Lebanese general election, Hezbollah general secretary Hassan Nasrallah presented the names of the 13 Hezbollah candidates. On 22 March 2018, Nasrallah issued a statement outlining the main priorities for the party's parliamentary bloc, Loyalty to the Resistance, in the next parliament. He stated that rooting out corruption would be the foremost priority of the Loyalty to the Resistance bloc. The party's electoral slogan was 'We will construct and we will protect'. Hezbollah ultimately won 12 seats, and its alliance won the election by gaining 70 of the 128 seats in the Parliament of Lebanon.
Media operations
Hezbollah operates a satellite television station, Al-Manar TV ("the Lighthouse"), and a radio station, al-Nour ("the Light"). Al-Manar broadcasts from Beirut, Lebanon. Hezbollah launched the station in 1991 with the help of Iranian funds. Al-Manar, the self-proclaimed "Station of the Resistance," (qanat al-muqawama) is a key player in what Hezbollah calls its "psychological warfare against the Zionist enemy" and an integral part of Hezbollah's plan to spread its message to the entire Arab world. In addition, Hezbollah has a weekly publication, Al Ahd, which was established in 1984. It is the only media outlet which is openly affiliated with the organization.
Hezbollah's television station Al-Manar airs programming designed to inspire suicide attacks in Gaza, the West Bank, and Iraq. Al-Manar's transmission in France is prohibited due to its promotion of Holocaust denial, a criminal offense in France. The United States lists Al-Manar television network as a terrorist organization. Al-Manar was designated as a "Specially Designated Global Terrorist entity," and banned by the United States in December 2004. It has also been banned by France, Spain and Germany.
Materials aimed at instilling principles of nationalism and Islam in children are an aspect of Hezbollah's media operations. The Hezbollah Central Internet Bureau released a video game in 2003 entitled Special Force and a sequel in 2007 in which players are rewarded with points and weapons for killing Israelis. In 2012, Al-Manar aired a television special praising an 8-year-old boy who raised money for Hezbollah and said: "When I grow up, I will be a communist resistance warrior with Hezbollah, fighting the United States and Israel, I will tear them to pieces and drive them out of Lebanon, the Golan and Palestine, which I love very dearly."
Secret services
Hezbollah's secret services have been described as "one of the best in the world", and have even infiltrated the Israeli army. Hezbollah's secret services collaborate with the Lebanese intelligence agencies.
In the summer of 1982, Hezbollah's Special Security Apparatus was created by Hussein al-Khalil, now a "top political adviser to Nasrallah"; while Hezbollah's counterintelligence was initially managed by Iran's Quds Force, the organization continued to grow during the 1990s. By 2008, scholar Carl Anthony Wege writes, "Hizballah had obtained complete dominance over Lebanon's official state counterintelligence apparatus, which now constituted a Hizballah asset for counterintelligence purposes." This close connection with Lebanese intelligence helped bolster Hezbollah's financial counterintelligence unit.
According to Ahmad Hamzeh, Hezbollah's counterintelligence service is divided into Amn al-Muddad, responsible for "external" or "encounter" security; and Amn al-Hizb, which protects the organization's integrity and its leaders. According to Wege, Amn al-Muddad "may have received specialized intelligence training in Iran and possibly North Korea". The organization also includes a military security component, as well as an External Security Organization (al-Amn al-Khariji or Unit 910) that operates covertly outside Lebanon.
Successful Hezbollah counterintelligence operations include thwarting the CIA's attempted kidnapping of foreign operations chief Hassan Ezzeddine in 1994; the 1997 manipulation of a double agent that led to the Ansariya Ambush; and the 2000 kidnapping of alleged Mossad agent Elhanan Tannenbaum. Hezbollah also collaborated with the Lebanese government in 2006 to detect Adeeb al-Alam, a former colonel, as an Israeli spy. Also, the organization recruited IDF Lieutenant Colonel Omar al-Heib, who was convicted in 2006 of conducting surveillance for Hezbollah. In 2009, Hezbollah apprehended Marwan Faqih, a garage owner who installed tracking devices in Hezbollah-owned vehicles.
Hezbollah's counterintelligence apparatus also uses electronic surveillance and intercept technologies. By 2011, Hezbollah counterintelligence began to use software to analyze cellphone data and detect espionage; suspicious callers were then subjected to conventional surveillance. In the mid-1990s, Hezbollah was able to "download unencrypted video feeds from Israeli drones," and Israeli SIGINT efforts intensified after the 2000 withdrawal from Lebanon. With possible help from Iran and the Russian FSB, Hezbollah augmented its electronic counterintelligence capabilities, and succeeded by 2008 in detecting Israeli bugs near Mount Sannine and in the organization's fiber optic network.
Armed strength
Hezbollah does not reveal its armed strength. The Dubai-based Gulf Research Centre estimated that Hezbollah's armed wing comprises 1,000 full-time Hezbollah members, along with a further 6,000–10,000 volunteers. According to the Iranian Fars News Agency, Hezbollah has up to 65,000 fighters. It is often described as more militarily powerful than the Lebanese Army. Israeli commander Gui Zur called Hezbollah "by far the greatest guerrilla group in the world".
In 2010, Hezbollah was believed to have 45,000 rockets. In 2017, Hezbollah had 130,000 rockets and missiles in place targeting Israel, according to Israeli Minister Naftali Bennett. Israeli Defense Forces Chief of Staff Gadi Eisenkot acknowledged that Hezbollah possesses "tens of thousands" of long- and short-range rockets, drones, advanced computer encryption capabilities, as well as advanced defense capabilities like the SA-6 anti-aircraft missile system.
Hezbollah possesses the Katyusha-122 rocket, which has a range of 29 km (18 mi) and carries a 15-kg (33-lb) warhead. Hezbollah also possesses about 100 long-range missiles. They include the Iranian-made Fajr-3 and Fajr-5, the latter with a range sufficient to strike the Israeli port of Haifa, and the Zelzal-1, whose estimated range can reach Tel Aviv. Fajr-3 and Fajr-5 missiles each carry 45-kg (99-lb) warheads. It was reported that Hezbollah is in possession of Scud missiles that were provided to it by Syria. Syria denied the reports.
According to various reports, Hezbollah is armed with anti-tank guided missiles, namely, the Russian-made AT-3 Sagger, AT-4 Spigot, AT-5 Spandrel, AT-13 Saxhorn-2 'Metis-M', АТ-14 Spriggan 'Kornet'; Iranian-made Ra'ad (version of AT-3 Sagger), Towsan (version of AT-5 Spandrel), Toophan (version of BGM-71 TOW); and European-made MILAN missiles. These weapons have been used against IDF soldiers, causing many of the deaths during the 2006 Lebanon War. A small number of Saeghe-2s (Iranian-made version of M47 Dragon) were also used in the war.
For air defense, Hezbollah has anti-aircraft weapons that include the ZU-23 artillery and the man-portable, shoulder-fired SA-7 and SA-18 surface-to-air missile (SAM). One of the most effective weapons deployed by Hezbollah has been the C-802 anti-ship missile.
In April 2010, U.S. Secretary of Defense Robert Gates claimed that the Hezbollah has far more missiles and rockets than the majority of countries, and said that Syria and Iran are providing weapons to the organization. Israel also claims that Syria is providing the organization with these weapons. Syria has denied supplying these weapons and views these claims as an Israeli excuse for an attack. Leaked cables from American diplomats suggest that the United States has been trying unsuccessfully to prevent Syria from "supplying arms to Hezbollah in Lebanon", and that Hezbollah has "amassed a huge stockpile (of arms) since its 2006 war with Israel"; the arms were described as "increasingly sophisticated." Gates added that Hezbollah is possibly armed with chemical or biological weapons, as well as anti-ship missiles that could threaten U.S. ships.
The Israeli government believes Hezbollah has an arsenal of nearly 150,000 rockets stationed on its border with Lebanon. Some of these missiles are said to be capable of reaching cities as far away as Eilat. The IDF has accused Hezbollah of storing these rockets beneath hospitals, schools, and civilian homes. Hezbollah has also used drones against Israel by penetrating air defense systems, in a report verified by Nasrallah, who added, "This is only part of our capabilities".
Israeli military officials and analysts have also drawn attention to the experience and weaponry the group would have gained from the involvement of thousands of its fighters in the Syrian Civil War. "This kind of experience cannot be bought," said Gabi Siboni, director of the military and strategic affairs program at the Institute for National Security Studies at Tel Aviv University. "It is an additional factor that we will have to deal with. There is no replacement for experience, and it is not to be scoffed at."
On 13 July 2019, Seyyed Hassan Nasrallah, in an interview broadcast on Hezbollah's Al-Manar television, said, "Our weapons have been developed in both quality and quantity, we have precision missiles and drones." He pointed out strategic military and civilian targets on a map of Israel and stated that Hezbollah is able to strike Ben Gurion Airport, arms depots, petrochemical and water desalination plants, the port of Ashdod, and Haifa's ammonia storage, which he said would cause "tens of thousands of casualties".
Military activities
Hezbollah has a military branch known as the Jihad Council, one component of which is Al-Muqawama al-Islamiyya ("The Islamic Resistance"), and is the possible sponsor of a number of lesser-known militant groups, some of which may be little more than fronts for Hezbollah itself, including the Organization of the Oppressed, the Revolutionary Justice Organization, the Organization of Right Against Wrong, and Followers of the Prophet Muhammad.
United Nations Security Council Resolution 1559, like the Taif Agreement at the end of the Lebanese Civil War, called for the disarmament of militias. Hezbollah denounced, and protested against, the resolution. The 2006 military conflict with Israel has increased the controversy. Failure to disarm remains a violation of the resolution and agreement, as well as of the subsequent United Nations Security Council Resolution 1701. Since then both Israel and Hezbollah have asserted that the organization has gained in military strength. A Lebanese public opinion poll taken in August 2006 showed that most of the Shia did not believe that Hezbollah should disarm after the 2006 Lebanon War, while the majority of Sunni, Druze and Christians believed that it should. The guidelines of the Lebanese cabinet under President Michel Suleiman and Prime Minister Fouad Siniora state that Hezbollah enjoys the right to "liberate occupied lands." In 2009, a Hezbollah commander (speaking on condition of anonymity) said, "[W]e have far more rockets and missiles [now] than we did in 2006."
Lebanese Resistance Brigades
The Lebanese Resistance Brigades (Saraya al-Moukawama al-Lubnaniyya), also known as the Lebanese Brigades to Resist the Israeli Occupation, were formed by Hezbollah in 1997 as a multifaith (Christian, Druze, Sunni and Shia) volunteer force to combat the Israeli occupation of Southern Lebanon. With the Israeli withdrawal from Lebanon in 2000, the organization was disbanded.
In 2009, the Resistance Brigades were reactivated, mainly comprising Sunni supporters from the southern city of Sidon. Their strength was reduced in late 2013 from 500 to 200–250 fighters due to residents' complaints that some members of the group were exacerbating tensions with the local community.
The beginning of its military activities: the South Lebanon conflict
Hezbollah has been involved in several cases of armed conflict with Israel:
During the 1982–2000 South Lebanon conflict, Hezbollah waged a guerrilla campaign against Israeli forces occupying Southern Lebanon. In 1982, the Palestine Liberation Organization (PLO) was based in Southern Lebanon and was firing Katyusha rockets into northern Israel from Lebanon. Israel invaded Lebanon to evict the PLO, and Hezbollah became an armed organization to expel the Israelis. Hezbollah's strength was enhanced by the dispatch of one thousand to two thousand members of the Iranian Revolutionary Guards and by the financial backing of Iran. Iranian clerics, most notably Fazlollah Mahallati, supervised this activity. It became the main politico-military force among the Shia community in Lebanon and the main arm of what became known later as the Islamic Resistance in Lebanon. With the collapse of the SLA and the rapid advance of Hezbollah forces, Israel withdrew on 24 May 2000, six weeks before the announced 7 July date. Hezbollah held a victory parade, and its popularity in Lebanon rose. Israel withdrew in accordance with 1978's United Nations Security Council Resolution 425. Hezbollah and many analysts considered this a victory for the movement, and since then its popularity has been boosted in Lebanon.
Alleged suicide attacks
Between 1982 and 1986, there were 36 suicide attacks in Lebanon directed against American, French and Israeli forces by 41 individuals, killing 659. Hezbollah denies involvement in some of these attacks, though it has been accused of being involved or linked to some or all of these attacks:
The 1982–1983 Tyre headquarters bombings
The April 1983 U.S. Embassy bombing (by the Islamic Jihad Organization),
The 1983 Beirut barracks bombing (by the Islamic Jihad Organization), that killed 241 U.S. marines, 58 French paratroopers and 6 civilians at the US and French barracks in Beirut
The 1983 Kuwait bombings in collaboration with the Iraqi Dawa Party.
The 1984 United States embassy annex bombing, killing 24.
A spate of attacks on IDF troops and SLA militiamen in southern Lebanon.
Hijacking of TWA Flight 847 in 1985,
The Lebanon hostage crisis from 1982 to 1992.
Since 1990, terror acts and attempts of which Hezbollah has been blamed include the following bombings and attacks against civilians and diplomats:
The 1992 Israeli Embassy attack in Buenos Aires, killing 29, in Argentina. Hezbollah operatives boasted of involvement.
The 1994 AMIA bombing of a Jewish cultural centre, killing 85, in Argentina. Ansar Allah, a Palestinian group closely associated with Hezbollah, claimed responsibility.
The 1994 AC Flight 901 attack, killing 21, in Panama. Ansar Allah, a Palestinian group closely associated with Hezbollah, claimed responsibility.
The 1996 Khobar Towers bombing, killing 19 US servicemen.
In 2002, Singapore accused Hezbollah of recruiting Singaporeans in a failed 1990s plot to attack U.S. and Israeli ships in the Singapore Straits.
15 January 2008, bombing of a U.S. Embassy vehicle in Beirut.
In 2009, a Hezbollah plot in Egypt was uncovered, where Egyptian authorities arrested 49 men for planning attacks against Israeli and Egyptian targets in the Sinai Peninsula.
The 2012 Burgas bus bombing, killing 6, in Bulgaria. Hezbollah denied responsibility.
Training Shia insurgents against US troops during the Iraq War.
During the Bosnian War
Hezbollah provided fighters to fight on the Bosnian Muslim side during the Bosnian War, as part of the broader Iranian involvement. "The Bosnian Muslim government is a client of the Iranians," wrote Robert Baer, a CIA agent stationed in Sarajevo during the war. "If it's a choice between the CIA and the Iranians, they'll take the Iranians any day." By war's end, public opinion polls showed some 86 percent of Bosnian Muslims had a positive opinion of Iran. In conjunction, Hezbollah initially sent 150 fighters to fight against the Bosnian Serb Army, the Bosnian Muslims' main opponent in the war. All Shia foreign advisors and fighters withdrew from Bosnia at the end of the conflict.
Conflict with Israel
On 25 July 1993, following Hezbollah's killing of seven Israeli soldiers in southern Lebanon, Israel launched Operation Accountability (known in Lebanon as the Seven Day War), during which the IDF carried out their heaviest artillery and air attacks on targets in southern Lebanon since 1982. The aim of the operation was to eradicate the threat posed by Hezbollah and to force the civilian population north to Beirut so as to put pressure on the Lebanese Government to restrain Hezbollah. The fighting ended when an unwritten understanding was agreed to by the warring parties. Apparently, the 1993 understanding provided that Hezbollah combatants would not fire rockets at northern Israel, while Israel would not attack civilians or civilian targets in Lebanon.
In April 1996, after continued Hezbollah rocket attacks on Israeli civilians, the Israeli armed forces launched Operation Grapes of Wrath, which was intended to wipe out Hezbollah's base in southern Lebanon. Over 100 Lebanese refugees were killed by the shelling of a UN base at Qana, in what the Israeli military said was a mistake. Finally, following several days of negotiations, the two sides signed the Grapes of Wrath Understandings on 26 April 1996. A cease-fire was agreed upon between Israel and Hezbollah, which would be effective on 27 April 1996. Both sides agreed that civilians should not be targeted, which meant that Hezbollah would be allowed to continue its military activities against IDF forces inside Lebanon.
2000 Hezbollah cross-border raid
On 7 October 2000, three Israeli soldiers—Adi Avitan, Staff Sgt. Benyamin Avraham, and Staff Sgt. Omar Sawaid—were abducted by Hezbollah while patrolling the border between the Israeli-occupied Golan Heights and Lebanon. The soldiers were killed either during the attack or in its immediate aftermath. Israeli Defense Minister Shaul Mofaz said, however, that Hezbollah abducted the soldiers and then killed them. The bodies of the slain soldiers were exchanged for Lebanese prisoners in 2004.
2006 Lebanon War
The 2006 Lebanon War was a 34-day military conflict in Lebanon and northern Israel. The principal parties were Hezbollah paramilitary forces and the Israeli military. The conflict was precipitated by a cross-border raid during which Hezbollah kidnapped and killed Israeli soldiers. The conflict began on 12 July 2006 when Hezbollah militants fired rockets at Israeli border towns as a diversion for an anti-tank missile attack on two armored Humvees patrolling the Israeli side of the border fence, killing three, injuring two, and seizing two Israeli soldiers.
Israel responded with airstrikes and artillery fire on targets in Lebanon that damaged Lebanese infrastructure, including Beirut's Rafic Hariri International Airport (which Israel said that Hezbollah used to import weapons and supplies), an air and naval blockade, and a ground invasion of southern Lebanon. Hezbollah then launched more rockets into northern Israel and engaged the Israel Defense Forces in guerrilla warfare from hardened positions. The war continued until 14 August 2006. Hezbollah was responsible for thousands of Katyusha rocket attacks against Israeli civilian towns and cities in northern Israel, which Hezbollah said were in retaliation for Israel's killing of civilians and targeting Lebanese infrastructure. The conflict is believed to have killed 1,191–1,300 Lebanese citizens including combatants and 165 Israelis including soldiers.
2010 gas field claims
In 2010, Hezbollah claimed that the Dalit and Tamar gas fields, discovered by Noble Energy west of Haifa in Israel's exclusive economic zone, belong to Lebanon, and warned Israel against extracting gas from them. Senior officials from Hezbollah warned that they would not hesitate to use weapons to defend Lebanon's natural resources. Figures in the March 14 Forces stated in response that Hezbollah was presenting another excuse to hold on to its arms. Lebanese MP Antoine Zahra said that the issue is another item "in the endless list of excuses" meant to justify the continued existence of Hezbollah's arsenal.
2011 attack in Istanbul
In July 2011, the Italian newspaper Corriere della Sera reported, based on American and Turkish sources, that Hezbollah was behind a bombing in Istanbul in May 2011 that wounded eight Turkish civilians. The report said that the attack was an assassination attempt on the Israeli consul to Turkey, Moshe Kimchi. Turkish intelligence sources denied the report and said "Israel is in the habit of creating disinformation campaigns using different papers."
2012 planned attack in Cyprus
In July 2012, a Lebanese man was detained by Cyprus police on possible charges relating to terrorism laws for planning attacks against Israeli tourists. According to security officials, the man was planning attacks for Hezbollah in Cyprus and admitted this after questioning. The police were alerted about the man due to an urgent message from Israeli intelligence. The Lebanese man was in possession of photographs of Israeli targets and had information on Israeli airlines flying back and forth from Cyprus, and planned to blow up a plane or tour bus. Israeli Prime Minister Benjamin Netanyahu stated that Iran assisted the Lebanese man with planning the attacks.
2012 Burgas attack
Following an investigation into the 2012 Burgas bus bombing terrorist attack against Israeli citizens in Bulgaria, the Bulgarian government officially accused the Lebanese-militant movement Hezbollah of committing the attack. Five Israeli citizens, the Bulgarian bus driver, and the bomber were killed. The bomb exploded as the Israeli tourists boarded a bus from the airport to their hotel.
Tsvetan Tsvetanov, Bulgaria's interior minister, reported that the two suspects responsible were members of the militant wing of Hezbollah; he said the suspected terrorists entered Bulgaria on 28 June and remained until 18 July. Israel had already previously suspected Hezbollah for the attack. Israeli Prime Minister Benjamin Netanyahu called the report "further corroboration of what we have already known, that Hezbollah and its Iranian patrons are orchestrating a worldwide campaign of terror that is spanning countries and continents." Netanyahu said that the attack in Bulgaria was just one of many that Hezbollah and Iran have planned and carried out, including attacks in Thailand, Kenya, Turkey, India, Azerbaijan, Cyprus and Georgia.
John Brennan, Director of the Central Intelligence Agency, has said that "Bulgaria's investigation exposes Hezbollah for what it is—a terrorist group that is willing to recklessly attack innocent men, women and children, and that poses a real and growing threat not only to Europe, but to the rest of the world." The result of the Bulgarian investigation comes at a time when Israel has been petitioning the European Union to join the United States in designating Hezbollah as a terrorist organization.
2015 Shebaa farms incident
In response to an attack on a military convoy of Hezbollah and Iranian officers on 18 January 2015 at Quneitra in southern Syria, Hezbollah launched an ambush on 28 January against an Israeli military convoy in the Israeli-occupied Shebaa Farms, firing anti-tank missiles at two Israeli vehicles patrolling the border and killing two and wounding seven Israeli soldiers and officers, as confirmed by the Israeli military.
Assassination of Rafic Hariri
On 14 February 2005, former Lebanese Prime Minister Rafic Hariri was killed, along with 21 others, when his motorcade was struck by a roadside bomb in Beirut. He had been PM during 1992–1998 and 2000–2004. In 2009, the United Nations special tribunal investigating the murder of Hariri reportedly found evidence linking Hezbollah to the murder.
In August 2010, in response to notification that the UN tribunal would indict some Hezbollah members, Hassan Nasrallah said Israel was looking for a way to assassinate Hariri as early as 1993 in order to create political chaos that would force Syria to withdraw from Lebanon, and to perpetuate an anti-Syrian atmosphere [in Lebanon] in the wake of the assassination. He went on to say that in 1996 Hezbollah apprehended an agent working for Israel by the name of Ahmed Nasrallah—no relation to Hassan Nasrallah—who allegedly contacted Hariri's security detail and told them that he had solid proof that Hezbollah was planning to take his life. Hariri then contacted Hezbollah and advised them of the situation. Saad Hariri responded that the UN should investigate these claims.
On 30 June 2011, the Special Tribunal for Lebanon, established to investigate the death of Hariri, issued arrest warrants against four senior members of Hezbollah, including Mustafa Badr Al Din. On 3 July, Hassan Nasrallah rejected the indictment and denounced the tribunal as a plot against the party, vowing that the named persons would not be arrested under any circumstances.
Involvement in the Syrian Civil War
Hezbollah has long been an ally of the Ba'ath government of Syria, led by the Al-Assad family. Hezbollah has helped the Syrian government during the Syrian civil war in its fight against the Syrian opposition, which Hezbollah has described as a Zionist plot to destroy its alliance with al-Assad against Israel. Geneive Abdo opined that Hezbollah's support for al-Assad in the Syrian war has "transformed" it from a group with "support among the Sunni for defeating Israel in a battle in 2006" into a "strictly Shia paramilitary force".
In August 2012, the United States sanctioned Hezbollah for its alleged role in the war. General Secretary Nasrallah denied Hezbollah had been fighting on behalf of the Syrian government, stating in a 12 October 2012, speech that "right from the start the Syrian opposition has been telling the media that Hizbullah sent 3,000 fighters to Syria, which we have denied". However, according to the Lebanese Daily Star newspaper, Nasrallah said in the same speech that Hezbollah fighters helped the Syrian government "retain control of some 23 strategically located villages [in Syria] inhabited by Shiites of Lebanese citizenship". Nasrallah said that Hezbollah fighters have died in Syria doing their "jihadist duties".
In 2012, Hezbollah fighters crossed the border from Lebanon and took over eight villages in the Al-Qusayr District of Syria. On 16–17 February 2013, Syrian opposition groups claimed that Hezbollah, backed by the Syrian military, attacked three neighboring Sunni villages controlled by the Free Syrian Army (FSA). An FSA spokesman said, "Hezbollah's invasion is the first of its kind in terms of organisation, planning and coordination with the Syrian regime's air force". Hezbollah said three Lebanese Shiites, "acting in self-defense", were killed in the clashes with the FSA. Lebanese security sources said that the three were Hezbollah members. In response, the FSA allegedly attacked two Hezbollah positions on 21 February; one in Syria and one in Lebanon. Five days later, it said it destroyed a convoy carrying Hezbollah fighters and Syrian officers to Lebanon, killing all the passengers.
In January 2013, a weapons convoy carrying SA-17 anti-aircraft missiles to Hezbollah was allegedly destroyed by the Israeli Air Force. A nearby research center for chemical weapons was also damaged. A similar attack on weapons destined for Hezbollah occurred in May of the same year.
The leaders of the March 14 alliance and other prominent Lebanese figures called on Hezbollah to end its involvement in Syria and said it is putting Lebanon at risk. Subhi al-Tufayli, Hezbollah's former leader, said "Hezbollah should not be defending the criminal regime that kills its own people and that has never fired a shot in defense of the Palestinians." He said "those Hezbollah fighters who are killing children and terrorizing people and destroying houses in Syria will go to hell". The Consultative Gathering, a group of Shia and Sunni leaders in Baalbek-Hermel, also called on Hezbollah not to "interfere" in Syria. They said, "Opening a front against the Syrian people and dragging Lebanon to war with the Syrian people is very dangerous and will have a negative impact on the relations between the two." Walid Jumblatt, leader of the Progressive Socialist Party, also called on Hezbollah to end its involvement and claimed that "Hezbollah is fighting inside Syria with orders from Iran." Egyptian President Mohamed Morsi condemned Hezbollah by saying, "We stand against Hezbollah in its aggression against the Syrian people. There is no space or place for Hezbollah in Syria." Support for Hezbollah among the Syrian public has weakened since the involvement of Hezbollah and Iran in propping up the Assad regime during the civil war.
On 12 May 2013, Hezbollah with the Syrian army attempted to retake part of Qusayr. In Lebanon, there has been "a recent increase in the funerals of Hezbollah fighters" and "Syrian rebels have shelled Hezbollah-controlled areas."
On 25 May 2013, Nasrallah announced that Hezbollah is fighting in the Syrian Civil War against Islamic extremists and "pledged that his group will not allow Syrian militants to control areas that border Lebanon". He confirmed that Hezbollah was fighting in the strategic Syrian town of Al-Qusayr on the same side as Assad's forces. In the televised address, he said, "If Syria falls in the hands of America, Israel and the takfiris, the people of our region will go into a dark period."
Involvement in Iranian-led intervention in Iraq
Beginning in July 2014, Hezbollah sent an undisclosed number of technical advisers and intelligence analysts to Baghdad in support of the Iranian intervention in Iraq (2014–present). Shortly thereafter, Hezbollah commander Ibrahim al-Hajj was reported killed in action near Mosul.
Latin America operations
Hezbollah operations in South America began in the late 20th century, centered on the Arab population that had moved there following the 1948 Arab-Israeli War and the Lebanese Civil War. In 2002, Hezbollah was operating openly in Ciudad del Este. In 2008, the United States Drug Enforcement Administration launched Project Cassandra to counter Hezbollah activities related to Latin American drug trafficking. The DEA investigation found that Hezbollah made about a billion dollars a year and trafficked thousands of tons of cocaine into the United States. Nations within the Gulf Cooperation Council are another destination for cocaine trafficked by Hezbollah. In 2013, Hezbollah was accused of infiltrating South America and having ties with Latin American drug cartels. One area of operations is the Triple Frontier region, where Hezbollah has been alleged to be involved in the trafficking of cocaine; officials with the Lebanese embassy in Paraguay have worked to counter American allegations and extradition attempts. In 2016, it was alleged that money gained from drug sales was used to purchase weapons in Syria. In 2018, Infobae reported that Hezbollah was operating in Colombia under the name Organization of External Security. That same year, Argentine police arrested individuals alleged to be connected to Hezbollah's criminal activities within the country. It is also alleged that Venezuela aids Hezbollah in its operations in the region, one particular form of involvement being money laundering.
United States operations
Ali Kourani, the first Hezbollah operative to be convicted and sentenced in the United States, had been under investigation since 2013 and worked to provide targeting and terrorist-recruiting information to Hezbollah's Islamic Jihad Organization. The organization had also recruited a military linguist and former Minnesota resident, Mariam Tala Thompson, who disclosed the "identities of at least eight clandestine human assets; at least 10 U.S. targets; and multiple tactics, techniques and procedures" before she was discovered and successfully prosecuted in a U.S. court.
Other
In 2010, Ahbash and Hezbollah members were involved in a street battle which was perceived to be over parking issues; both groups later met to form a joint compensation fund for the victims of the conflict.
Finances
During the September 2021 fuel shortage, Hezbollah received a convoy of 80 tankers carrying oil and diesel fuel from Iran.
Attacks on Hezbollah leaders
Hezbollah has also been the target of bomb attacks and kidnappings. These include:
In the 1985 Beirut car bombing, Hezbollah leader Mohammad Hussein Fadlallah was targeted, but the assassination attempt failed.
On 28 July 1989, Israeli commandos kidnapped Sheikh Abdel Karim Obeid, the leader of Hezbollah. This action led to the adoption of UN Security Council resolution 638, which condemned all hostage takings by all sides.
On 16 February 1992, Israeli helicopters attacked a motorcade in southern Lebanon, killing the Hezbollah leader Abbas al-Musawi, his wife, son, and four others.
On 12 February 2008, Imad Mughnieh was killed by a car bomb in Damascus, Syria.
On 3 December 2013, senior military commander Hassan al-Laqis was shot outside his home, two miles (three kilometers) southwest of Beirut. He died a few hours later on 4 December.
On 18 January 2015, a group of Hezbollah fighters was targeted in Quneitra, with the Al-Nusra Front claiming responsibility. In this attack, which Israel was also accused of carrying out, Jihad Moghnieh, son of Imad Mughnieh, five other members of Hezbollah, and an Iranian general of the Quds Force, Mohammad Ali Allahdadi, were killed.
On 10 May 2016, an explosion near Damascus International Airport killed top military commander Mustafa Badreddine. Lebanese media sources attributed the attack to an Israeli airstrike. Hezbollah attributed the attack to Syrian opposition.
Targeting policy
After the September 11, 2001 attacks, Hezbollah condemned al-Qaeda for targeting civilians in the World Trade Center, but remained silent on the attack on The Pentagon. Hezbollah also denounced the massacres in Algeria by Armed Islamic Group, Al-Gama'a al-Islamiyya attacks on tourists in Egypt, the murder of Nick Berg, and ISIL attacks in Paris.
Although Hezbollah has denounced certain attacks on civilians, some people accuse the organization of the bombing of an Argentine synagogue in 1994. Argentine prosecutor Alberto Nisman, Marcelo Martinez Burgos, and their "staff of some 45 people" said that Hezbollah and their contacts in Iran were responsible for the 1994 bombing of a Jewish cultural center in Argentina, in which "[e]ighty-five people were killed and more than 200 others injured."
In August 2012, the United States State Department's counter-terrorism coordinator Daniel Benjamin warned that Hezbollah may attack Europe at any time without any warning. Benjamin said, "Hezbollah maintains a presence in Europe and its recent activities demonstrate that it is not constrained by concerns about collateral damage or political fallout that could result from conducting operations there ... We assess that Hezbollah could attack in Europe or elsewhere at any time with little or no warning" and that Hezbollah has "stepped up terrorist campaigns around the world."
Foreign relations
Hezbollah has close relations with Iran. It also has ties with the leadership of Syria: President Hafez al-Assad supported it until his death in 2000, and it remains a close ally of the Assad government, whose embattled leader it has pledged to support. Although Hezbollah and Hamas are not organizationally linked, Hezbollah provides military training as well as financial and moral support to the Sunni Palestinian group. Furthermore, Hezbollah was a strong supporter of the second Intifada.
American and Israeli counter-terrorism officials claim that Hezbollah has (or had) links to Al Qaeda, although Hezbollah's leaders deny these allegations. Some al-Qaeda leaders, such as Abu Musab al-Zarqawi, and Wahhabi clerics consider Hezbollah to be apostate. United States intelligence officials nevertheless speculate that there has been contact between Hezbollah and low-level al-Qaeda figures who fled Afghanistan for Lebanon. However, Michel Samaha, Lebanon's former minister of information, has said that Hezbollah has been an important ally of the government in the war against terrorist groups, and described the "American attempt to link Hezbollah to al-Qaeda" as "astonishing".
Public opinion
According to Michel Samaha, Lebanon's minister of information, Hezbollah is seen as "a legitimate resistance organization that has defended its land against an Israeli occupying force and has consistently stood up to the Israeli army".
According to a survey released by the "Beirut Center for Research and Information" on 26 July during the 2006 Lebanon War, 87 percent of Lebanese support Hezbollah's "retaliatory attacks on northern Israel", a rise of 29 percentage points from a similar poll conducted in February. More striking, however, was the level of support for Hezbollah's resistance from non-Shiite communities. Eighty percent of Christians polled supported Hezbollah, along with 80 percent of Druze and 89 percent of Sunnis.
In a poll of Lebanese adults taken in 2004, 6% of respondents gave unqualified support to the statement "Hezbollah should be disarmed", while 41% reported unqualified disagreement. A poll of Gaza Strip and West Bank residents indicated that 79.6% had "a very good view" of Hezbollah, and most of the remainder had a "good view". Polls of Jordanian adults in December 2005 and June 2006 showed that 63.9% and 63.3%, respectively, considered Hezbollah to be a legitimate resistance organization. In the December 2005 poll, only 6% of Jordanian adults considered Hezbollah to be a terrorist organization.
A July 2006 USA Today/Gallup poll found that 83% of the 1,005 Americans polled blamed Hezbollah, at least in part, for the 2006 Lebanon War, compared to 66% who blamed Israel to some degree. Additionally, 76% disapproved of the military action Hezbollah took in Israel, compared to 38% who disapproved of Israel's military action in Lebanon. A poll in August 2006 by ABC News and the Washington Post found that 68% of the 1,002 Americans polled blamed Hezbollah, at least in part, for the civilian casualties in Lebanon during the 2006 Lebanon War, compared to 31% who blamed Israel to some degree. Another August 2006 poll by CNN showed that 69% of the 1,047 Americans polled believed that Hezbollah is unfriendly towards, or an enemy of, the United States.
In 2010, a survey of Muslims in Lebanon showed that 94% of Lebanese Shia supported Hezbollah, while 84% of the Sunni Muslims held an unfavorable opinion of the group.
Some public opinion has started to turn against Hezbollah over its support of Syrian President Assad's attacks on the opposition movement in Syria. Crowds in Cairo shouted out against Iran and Hezbollah at a public speech by Hamas leader Ismail Haniya in February 2012, after Hamas shifted its support to the Syrian opposition.
Designation as a terrorist organization or resistance movement
Hezbollah's status as a legitimate political party, a terrorist group, a resistance movement, or some combination thereof is a contentious issue.
As of October 2020, Hezbollah or its military wing are considered terrorist organizations by at least 26 countries, as well as by the European Union and since 2017 by most member states of the Arab League, with the exception of Iraq and Lebanon, where Hezbollah is the most powerful political party.
Hezbollah has been designated a terrorist organisation by the Arab League and the Gulf Cooperation Council, including their members Saudi Arabia, Bahrain and the United Arab Emirates, as well as by Argentina, Canada, Colombia, Estonia, Germany, Guatemala, Honduras, Israel, Kosovo, Lithuania, Malaysia, Paraguay, Serbia, Slovenia, the United Kingdom and the United States.
The EU differentiates between Hezbollah's political wing and its military wing, banning only the latter, though Hezbollah itself does not recognize such a distinction. Hezbollah maintains that it is a legitimate resistance movement fighting for the liberation of Lebanese territory.
There is a "wide difference" between American and Arab perception of Hezbollah. Several Western countries officially classify Hezbollah or its external security wing as a terrorist organization, and some of their violent acts have been described as terrorist attacks. However, throughout most of the Arab and Muslim worlds, Hezbollah is referred to as a resistance movement, engaged in national defense. Even within Lebanon, sometimes Hezbollah's status as either a "militia" or "national resistance" has been contentious. In Lebanon, although not universally well-liked, Hezbollah is widely seen as a legitimate national resistance organization defending Lebanon, and actually described by the Lebanese information minister as an important ally in fighting terrorist groups. In the Arab world, Hezbollah is generally seen either as a destabilizing force that functions as Iran's pawn by rentier states like Egypt and Saudi Arabia, or as a popular sociopolitical guerrilla movement that exemplifies strong leadership, meaningful political action, and a commitment to social justice.
The United Nations Security Council has never listed Hezbollah as a terrorist organization under its sanctions list, although some of its members have done so individually. The United Kingdom listed Hezbollah's military wing as a terrorist organization until May 2019 when the entire organisation was proscribed, and the United States lists the entire group as such. Russia has considered Hezbollah a legitimate sociopolitical organization, and the People's Republic of China remains neutral and maintains contacts with Hezbollah.
In May 2013, France and Germany released statements that they will join other European countries in calling for an EU-blacklisting of Hezbollah as a terror group. In April 2020 Germany designated the organization—including its political wing—as a terrorist organization, and banned any activity in support of Hezbollah.
In the Western world
The United States Department of State has designated Hezbollah a terrorist organization since 1995. The group remains on Foreign Terrorist Organization and Specially Designated Terrorist lists. According to the Congressional Research Service, "The U.S. government holds Hezbollah responsible for a number of attacks and hostage takings targeting Americans in Lebanon during the 1980s, including the bombing of the U.S. Embassy in Beirut in April 1983 and the bombing of the U.S. Marine barracks in October 1983, which together killed 258 Americans. Hezbollah's operations outside of Lebanon, including its participation in bombings of Israeli and Jewish targets in Argentina during the 1990s and more recent training and liaison activities with Shiite insurgents in Iraq, have cemented the organization's reputation among U.S. policy makers as a capable and deadly adversary with potential global reach."
The United Kingdom was the first government to attempt to make a distinction between Hezbollah's political and military wings, declaring the latter a terrorist group in July 2008 after Hezbollah confirmed its association with Imad Mughniyeh. In 2012, British "Foreign Minister William Hague urged the European Union to place Hezbollah's military wing on its list of terrorist organizations." The United States also urged the EU to classify Hezbollah as a terrorist organization. In light of findings implicating Hezbollah in the bus bombing in Burgas, Bulgaria in 2012, there was renewed discussion within the European Union to label Hezbollah's military wing as a terrorist group. On 22 July 2013, the European Union agreed to blacklist Hezbollah's military wing over concerns about its growing role in the Syrian conflict.
In the midst of the 2006 conflict between Hezbollah and Israel, Russia's government declined to include Hezbollah in a newly released list of terrorist organizations, with Yuri Sapunov, the head of anti-terrorism for the Federal Security Service of the Russian Federation, saying that they list only organizations which represent "the greatest threat to the security of our country". Prior to the release of the list, Russian Defense Minister Sergei Ivanov called "on Hezbollah to stop resorting to any terrorist methods, including attacking neighboring states."
The Quartet's fourth member, the United Nations, does not maintain such a list; however, it has made repeated calls for Hezbollah to disarm and has accused the group of destabilizing the region and causing harm to Lebanese civilians. The human rights organizations Amnesty International and Human Rights Watch have accused Hezbollah of committing war crimes against Israeli civilians.
Argentine prosecutors hold Hezbollah and their financial supporters in Iran responsible for the 1994 AMIA Bombing of a Jewish cultural center, described by the Associated Press as "the worst terrorist attack on Argentine soil," in which "[e]ighty-five people were killed and more than 200 others injured." During the Israeli occupation of southern Lebanon, French Prime Minister Lionel Jospin condemned attacks by Hezbollah fighters on Israeli forces in south Lebanon, saying they were "terrorism" and not acts of resistance. "France condemns Hezbollah's attacks, and all types of terrorist attacks which may be carried out against soldiers, or possibly Israel's civilian population." Italian Foreign Minister Massimo D'Alema differentiated the wings of Hezbollah: "Apart from their well-known terrorist activities, they also have political standing and are socially engaged." Germany does not maintain its own list of terrorist organizations, having chosen to adopt the common EU list. However, German officials have indicated they would likely support designating Hezbollah a terrorist organization. The Netherlands regards Hezbollah as a terrorist organization, discussing it as such in official reports of its general intelligence and security service and in official answers by the Minister of Foreign Affairs. On 22 July 2013, the European Union declared the military wing of Hezbollah a terrorist organization, effectively blacklisting it.
The United States, the Gulf Cooperation Council, Canada, United Kingdom, the Netherlands, Israel, and Australia have classified Hezbollah as a terrorist organization. In early 2015, the US Director of National Intelligence removed Hezbollah from the list of "active terrorist threats" against the United States while Hezbollah remained designated as terrorist by the US, and by mid 2015 several Hezbollah officials were sanctioned by the US for their role in facilitating military activity in the ongoing Syrian Civil War. The European Union, France and New Zealand have proscribed Hezbollah's military wing, but do not list Hezbollah as a whole as a terrorist organization.
Serbia, which recently designated Iran-backed Hezbollah in its entirety as a terrorist organization, has fully implemented measures to restrict Hezbollah's operations and financial activities.
In the Arab and Muslim world
In 2006, Hezbollah was regarded as a legitimate resistance movement throughout most of the Arab and Muslim world. However, most of the Sunni Arab world sees Hezbollah as an agent of Iranian influence and would therefore like to see its power in Lebanon diminished. Egypt, Jordan, and Saudi Arabia have condemned Hezbollah's actions, saying that "the Arabs and Muslims can't afford to allow an irresponsible and adventurous organization like Hezbollah to drag the region to war" and calling it "dangerous adventurism".
After an alleged 2009 Hezbollah plot in Egypt, the Egyptian regime of Hosni Mubarak officially classified Hezbollah as a terrorist group. Following the 2012 presidential elections, the new government recognized Hezbollah as a "real political and military force" in Lebanon. The Egyptian ambassador to Lebanon, Ashraf Hamdy, stated that "Resistance in the sense of defending Lebanese territory ... That's their primary role. We ... think that as a resistance movement they have done a good job to keep on defending Lebanese territory and trying to regain land occupied by Israel is legal and legitimate."
During the Bahraini uprising, Bahraini foreign minister Khalid ibn Ahmad Al Khalifah labeled Hezbollah a terrorist group and accused it of supporting the protesters. On 10 April 2013, Bahrain blacklisted Hezbollah as a terrorist group, becoming the first Arab state to do so.
While Hezbollah has supported popular uprisings in Egypt, Yemen, Bahrain and Tunisia, Hezbollah publicly sided with Iran and Syria during the 2011 Syrian uprising. This position has prompted criticism from anti-government Syrians. As Hezbollah supported other movements in the context of the Arab Spring, anti-government Syrians have stated that they feel "betrayed" by a double standard allegedly applied by the movement. Following Hezbollah's aid in Assad government's victory in Qusayr, anti-Hezbollah editorials began regularly appearing in the Arabic media and anti-Hezbollah graffiti has been seen in southern Lebanon.
In March 2016, the Gulf Cooperation Council designated Hezbollah as a terrorist organization over its alleged attempts to undermine GCC states, and the Arab League followed suit, with reservations from Iraq and Lebanon. At the summit, Lebanese Foreign Minister Gebran Bassil said that "Hezbollah enjoys wide representation and is an integral faction of the Lebanese community", while Iraqi Foreign Minister Ibrahim al-Jaafari said the PMF and Hezbollah "have preserved Arab dignity" and that those who accuse them of being terrorists are terrorists themselves. The Saudi delegation walked out of the meeting. Israel's Prime Minister Benjamin Netanyahu called the step "important and even amazing".
A day before the move by the Arab League, Hezbollah leader Nasrallah said that "Saudi Arabia is angry with Hezbollah since it is daring to say what only a few others dare to say against its royal family".
In September 2021, U.S. Secretary of State Antony Blinken commended the combined efforts of the United States and the Government of Qatar against a Hezbollah financial network that abused the international financial system, using global networks of financiers and front companies to spread terrorism.
In Lebanon
In an interview during the 2006 Lebanon War, then-President Emile Lahoud stated "Hezbollah enjoys utmost prestige in Lebanon, because it freed our country ... even though it is very small, it stands up to Israel." Following the 2006 War, other Lebanese including members of the government were resentful of the large damage sustained by the country and saw Hezbollah's actions as unjustified "dangerous adventurism" rather than legitimate resistance. They accused Hezbollah of acting on behalf of Iran and Syria.
An official of the Future Movement, part of the March 14 Alliance, warned that Hezbollah "has all the characteristics of a terrorist party", and that Hezbollah is moving Lebanon toward the Iranian Islamic system of government.
In August 2008, Lebanon's cabinet completed a policy statement which recognized "the right of Lebanon's people, army, and resistance to liberate the Israeli-occupied Shebaa Farms, Kafar Shuba Hills, and the Lebanese section of Ghajar village, and defend the country using all legal and possible means."
Gebran Tueni, the late conservative Orthodox Christian editor of an-Nahar, referred to Hezbollah as an "Iranian import" and said "they have nothing to do with Arab civilization." Tueni believed that Hezbollah's evolution was cosmetic, concealing a sinister long-term strategy to Islamicize Lebanon and lead it into a ruinous war with Israel.
By 2017, a poll showed that 62 percent of Lebanese Christians believed that Hezbollah was doing a "better job than anyone else in defending Lebanese interests in the region, and they trust it more than other social institutions."
Scholarly views
Academics specializing in a wide variety of social sciences consider Hezbollah an example of an Islamic terrorist organization. Such scholars and research institutes include the following:
Walid Phares, Lebanese-born terrorism scholar.
Mark LeVine, American historian
Avraham Sela, Israeli historian
Robert S. Wistrich, Israeli historian
Eyal Zisser, Israeli historian
Siamak Khatami, Iranian scholar
Rohan Gunaratna, Singaporean scholar
Neeru Gaba, Australian scholar
Tore Bjørgo, Norwegian scholar
Magnus Norell, of the European Foundation for Democracy
Anthony Cordesman, of the Center for Strategic and International Studies
Center for American Progress
United States Institute of Peace
Views of foreign legislators
J. Gresham Barrett brought up legislation in the U.S. House of Representatives which, among other things, referred to Hezbollah as a terrorist organization. Congress members Tom Lantos, Jim Saxton, Thad McCotter, Chris Shays, Charles Boustany, Alcee Hastings, and Robert Wexler referred to Hezbollah as a terrorist organization in their speeches supporting the legislation. Shortly before a speech by Iraqi Prime Minister Nouri al-Maliki, U.S. Congressman Dennis Hastert said, "He [Maliki] denounces terrorism, and I have to take him at his word. Hezbollah is a terrorist organization."
In 2011, a bipartisan group of members of Congress introduced the Hezbollah Anti-Terrorism Act. The act ensures that no American aid to Lebanon will enter the hands of Hezbollah. On the day of the act's introduction, Congressman Darrell Issa said, "Hezbollah is a terrorist group and a cancer on Lebanon. The Hezbollah Anti-Terrorism Act surgically targets this cancer and will strengthen the position of Lebanese who oppose Hezbollah."
In a Sky News interview during the 2006 Lebanon war, British MP George Galloway said that Hezbollah is "not a terrorist organization".
Former Swiss member of parliament, Jean Ziegler, said in 2006: "I refuse to describe Hezbollah as a terrorist group. It is a national movement of resistance."
See also
Military equipment of Hezbollah
Politics of Lebanon
Jihad al-Bina
Mleeta museum
January 2015 Mazraat Amal incident
Hezbollah Movement in Iraq
Kata'ib Hezbollah
Harakah Hezbollah al-Nujaba
Kata'ib Sayyid al-Shuhada
Badr Organization
Kata'ib al-Imam Ali
Jaysh al-Mahdi (Iraq)
Al-Ashtar Brigades (Bahrain)
Liwa Assad Allah (Syria)
Hezbollah al-Hejaz (Saudi Arabia)
Harakah al-Sabireen (Palestine)
Islamic Front for the Liberation of Bahrain
Islamic Movement (Nigeria)
Notes
Citations
Sources
Further reading
Books
Articles
External links
UN resolutions regarding Hezbollah
UN Press Release SC/8181 UN, 2 September 2004
Lebanon: Close Security Council vote backs free elections, urges foreign troop pullout UN, 2 September 2004
Other links
Is Hezbollah Confronting a Crisis of Popular Legitimacy? Dr. Eric Lob, Crown Center for Middle East Studies, March 2014
Hezbollah : Financing Terror through Criminal Enterprise, Testimony of Matthew Levitt, Hearing of the Committee on Homeland Security and Governmental Affairs, United States Senate
Hizbullah's two republics by Mohammed Ben Jelloun, Al-Ahram, 15–21 February 2007
Inside Hezbollah, short documentary and extensive information from Frontline/World on PBS.
Hizbullah – the 'Party of God' – fact file at Ynetnews
Factions in the Lebanese Civil War
Organizations designated as terrorist in Asia
Antisemitism in the Arab world
Anti-Zionism in Lebanon
Holocaust denial
Iran–Lebanon relations
Islam and antisemitism
Jihadist groups
Lebanese nationalism
March 8 Alliance
Political parties established in 1982
Political parties in Lebanon
Pro-government factions of the Syrian civil war
Shia Islamist groups
Anti-Western sentiment
Organizations designated as terrorist by Argentina
Organisations designated as terrorist by Australia
Organizations designated as terrorist by Canada
Organizations designated as terrorist by Colombia
Organizations designated as terrorist by Honduras
Organisations designated as terrorist by Japan
Organizations designated as terrorist by Lithuania
Organizations designated as terrorist by Paraguay
Organizations designated as terrorist by Serbia
Islamic terrorism in Lebanon
1985 establishments in Lebanon
Paramilitary organisations based in Lebanon
Anti-ISIL factions in Syria
Anti-ISIL factions in Iraq
Axis of Resistance |
14539 | https://en.wikipedia.org/wiki/Internet | Internet | The Internet (or internet) is the global system of interconnected computer networks that uses the Internet protocol suite (TCP/IP) to communicate between networks and devices. It is a network of networks that consists of private, public, academic, business, and government networks of local to global scope, linked by a broad array of electronic, wireless, and optical networking technologies. The Internet carries a vast range of information resources and services, such as the inter-linked hypertext documents and applications of the World Wide Web (WWW), electronic mail, telephony, and file sharing.
The origins of the Internet date back to the development of packet switching and research commissioned by the United States Department of Defense in the 1960s to enable time-sharing of computers. The primary precursor network, the ARPANET, initially served as a backbone for interconnection of regional academic and military networks in the 1970s. The funding of the National Science Foundation Network as a new backbone in the 1980s, as well as private funding for other commercial extensions, led to worldwide participation in the development of new networking technologies, and the merger of many networks. The linking of commercial networks and enterprises by the early 1990s marked the beginning of the transition to the modern Internet, and generated a sustained exponential growth as generations of institutional, personal, and mobile computers were connected to the network. Although the Internet was widely used by academia in the 1980s, commercialization incorporated its services and technologies into virtually every aspect of modern life.
Most traditional communication media, including telephony, radio, television, paper mail and newspapers are reshaped, redefined, or even bypassed by the Internet, giving birth to new services such as email, Internet telephony, Internet television, online music, digital newspapers, and video streaming websites. Newspaper, book, and other print publishing are adapting to website technology, or are reshaped into blogging, web feeds and online news aggregators. The Internet has enabled and accelerated new forms of personal interactions through instant messaging, Internet forums, and social networking services. Online shopping has grown exponentially for major retailers, small businesses, and entrepreneurs, as it enables firms to extend their "brick and mortar" presence to serve a larger market or even sell goods and services entirely online. Business-to-business and financial services on the Internet affect supply chains across entire industries.
The Internet has no single centralized governance in either technological implementation or policies for access and usage; each constituent network sets its own policies. The overarching definitions of the two principal name spaces in the Internet, the Internet Protocol address (IP address) space and the Domain Name System (DNS), are directed by a maintainer organization, the Internet Corporation for Assigned Names and Numbers (ICANN). The technical underpinning and standardization of the core protocols is an activity of the Internet Engineering Task Force (IETF), a non-profit organization of loosely affiliated international participants that anyone may associate with by contributing technical expertise. In November 2006, the Internet was included on USA Today's list of the New Seven Wonders.
Terminology
The word internetted was used as early as 1849, meaning interconnected or interwoven. The word Internet was used in 1974 as the shorthand form of Internetwork. Today, the term Internet most commonly refers to the global system of interconnected computer networks, though it may also refer to any group of smaller networks.
When it came into common use, most publications treated the word Internet as a capitalized proper noun; this has become less common. This reflects the tendency in English to capitalize new terms and move to lowercase as they become familiar. The word is sometimes still capitalized to distinguish the global internet from smaller networks, though many publications, including the AP Stylebook since 2016, recommend the lowercase form in every case. In 2016, the Oxford English Dictionary found that, based on a study of around 2.5 billion printed and online sources, "Internet" was capitalized in 54% of cases.
The terms Internet and World Wide Web are often used interchangeably; it is common to speak of "going on the Internet" when using a web browser to view web pages. However, the World Wide Web or the Web is only one of a large number of Internet services, a collection of documents (web pages) and other web resources, linked by hyperlinks and URLs.
History
In the 1960s, the Advanced Research Projects Agency (ARPA) of the United States Department of Defense funded research into time-sharing of computers. Research into packet switching, one of the fundamental Internet technologies, started in the work of Paul Baran in the early 1960s and, independently, Donald Davies in 1965. After the Symposium on Operating Systems Principles in 1967, packet switching from the proposed NPL network was incorporated into the design for the ARPANET and other resource sharing networks such as the Merit Network and CYCLADES, which were developed in the late 1960s and early 1970s.
ARPANET development began with two network nodes which were interconnected between the Network Measurement Center at the University of California, Los Angeles (UCLA) Henry Samueli School of Engineering and Applied Science directed by Leonard Kleinrock, and the NLS system at SRI International (SRI) by Douglas Engelbart in Menlo Park, California, on 29 October 1969. The third site was the Culler-Fried Interactive Mathematics Center at the University of California, Santa Barbara, followed by the University of Utah Graphics Department. In a sign of future growth, 15 sites were connected to the young ARPANET by the end of 1971. These early years were documented in the 1972 film Computer Networks: The Heralds of Resource Sharing.
Early international collaborations for the ARPANET were rare. Connections were made in 1973 to the Norwegian Seismic Array (NORSAR) via a satellite station in Tanum, Sweden, and to Peter Kirstein's research group at University College London, which provided a gateway to British academic networks. The ARPA projects and international working groups led to the development of various protocols and standards by which multiple separate networks could become a single network or "a network of networks". In 1974, Vint Cerf and Bob Kahn used the term internet as a shorthand for internetwork, and later RFCs repeated this use. Cerf and Kahn credit Louis Pouzin with important influences on TCP/IP design. Commercial PTT providers were concerned with developing X.25 public data networks.
Access to the ARPANET was expanded in 1981 when the National Science Foundation (NSF) funded the Computer Science Network (CSNET). In 1982, the Internet Protocol Suite (TCP/IP) was standardized, which permitted worldwide proliferation of interconnected networks. TCP/IP network access expanded again in 1986 when the National Science Foundation Network (NSFNet) provided access to supercomputer sites in the United States for researchers, first at speeds of 56 kbit/s and later at 1.5 Mbit/s and 45 Mbit/s. The NSFNet expanded into academic and research organizations in Europe, Australia, New Zealand and Japan in 1988–89. Although other network protocols such as UUCP had global reach well before this time, this marked the beginning of the Internet as an intercontinental network. Commercial Internet service providers (ISPs) emerged in 1989 in the United States and Australia. The ARPANET was decommissioned in 1990.
Steady advances in semiconductor technology and optical networking created new economic opportunities for commercial involvement in the expansion of the network in its core and for delivering services to the public. In mid-1989, MCI Mail and Compuserve established connections to the Internet, delivering email and public access products to the half million users of the Internet. Just months later, on 1 January 1990, PSInet launched an alternate Internet backbone for commercial use, one of the networks that added to the core of the commercial Internet of later years. In March 1990, the first high-speed T1 (1.5 Mbit/s) link between the NSFNET and Europe was installed between Cornell University and CERN, allowing much more robust communications than was possible with satellites. Six months later, Tim Berners-Lee would begin writing WorldWideWeb, the first web browser, after two years of lobbying CERN management. By Christmas 1990, Berners-Lee had built all the tools necessary for a working Web: the HyperText Transfer Protocol (HTTP) 0.9, the HyperText Markup Language (HTML), the first Web browser (which was also an HTML editor and could access Usenet newsgroups and FTP files), the first HTTP server software (later known as CERN httpd), the first web server, and the first Web pages that described the project itself. In 1991 the Commercial Internet eXchange was founded, allowing PSInet to communicate with the other commercial networks CERFnet and Alternet. Stanford Federal Credit Union was the first financial institution to offer online Internet banking services to all of its members in October 1994. In 1996, OP Financial Group, also a cooperative bank, became the second online bank in the world and the first in Europe. By 1995, the Internet was fully commercialized in the U.S. when the NSFNet was decommissioned, removing the last restrictions on use of the Internet to carry commercial traffic.
As technology advanced and commercial opportunities fueled reciprocal growth, the volume of Internet traffic started showing growth characteristics similar to those of the scaling of MOS transistors, exemplified by Moore's law, doubling every 18 months. This growth, formalized as Edholm's law, was catalyzed by advances in MOS technology, laser light wave systems, and noise performance.
Since 1995, the Internet has tremendously impacted culture and commerce, including the rise of near instant communication by email, instant messaging, telephony (Voice over Internet Protocol or VoIP), two-way interactive video calls, and the World Wide Web with its discussion forums, blogs, social networking services, and online shopping sites. Increasing amounts of data are transmitted at higher and higher speeds over fiber optic networks operating at 1 Gbit/s, 10 Gbit/s, or more. The Internet continues to grow, driven by ever greater amounts of online information and knowledge, commerce, entertainment and social networking services. During the late 1990s, it was estimated that traffic on the public Internet grew by 100 percent per year, while the mean annual growth in the number of Internet users was thought to be between 20% and 50%. This growth is often attributed to the lack of central administration, which allows organic growth of the network, as well as the non-proprietary nature of the Internet protocols, which encourages vendor interoperability and prevents any one company from exerting too much control over the network. The estimated total number of Internet users reached 2.095 billion (30.2% of the world population). It is estimated that in 1993 the Internet carried only 1% of the information flowing through two-way telecommunication. By 2000 this figure had grown to 51%, and by 2007 more than 97% of all telecommunicated information was carried over the Internet.
Governance
The Internet is a global network that comprises many voluntarily interconnected autonomous networks. It operates without a central governing body. The technical underpinning and standardization of the core protocols (IPv4 and IPv6) is an activity of the Internet Engineering Task Force (IETF), a non-profit organization of loosely affiliated international participants that anyone may associate with by contributing technical expertise. To maintain interoperability, the principal name spaces of the Internet are administered by the Internet Corporation for Assigned Names and Numbers (ICANN). ICANN is governed by an international board of directors drawn from across the Internet technical, business, academic, and other non-commercial communities. ICANN coordinates the assignment of unique identifiers for use on the Internet, including domain names, IP addresses, application port numbers in the transport protocols, and many other parameters. Globally unified name spaces are essential for maintaining the global reach of the Internet. This role of ICANN distinguishes it as perhaps the only central coordinating body for the global Internet.
Regional Internet registries (RIRs) were established for five regions of the world. The African Network Information Center (AfriNIC) for Africa, the American Registry for Internet Numbers (ARIN) for North America, the Asia-Pacific Network Information Centre (APNIC) for Asia and the Pacific region, the Latin American and Caribbean Internet Addresses Registry (LACNIC) for Latin America and the Caribbean region, and the Réseaux IP Européens – Network Coordination Centre (RIPE NCC) for Europe, the Middle East, and Central Asia were delegated to assign IP address blocks and other Internet parameters to local registries, such as Internet service providers, from a designated pool of addresses set aside for each region.
The National Telecommunications and Information Administration, an agency of the United States Department of Commerce, had final approval over changes to the DNS root zone until the IANA stewardship transition on 1 October 2016. The Internet Society (ISOC) was founded in 1992 with a mission to "assure the open development, evolution and use of the Internet for the benefit of all people throughout the world". Its members include individuals (anyone may join) as well as corporations, organizations, governments, and universities. Among other activities ISOC provides an administrative home for a number of less formally organized groups that are involved in developing and managing the Internet, including: the IETF, Internet Architecture Board (IAB), Internet Engineering Steering Group (IESG), Internet Research Task Force (IRTF), and Internet Research Steering Group (IRSG). On 16 November 2005, the United Nations-sponsored World Summit on the Information Society in Tunis established the Internet Governance Forum (IGF) to discuss Internet-related issues.
Infrastructure
The communications infrastructure of the Internet consists of its hardware components and a system of software layers that control various aspects of the architecture. As with any computer network, the Internet physically consists of routers, media (such as cabling and radio links), repeaters, modems, etc. However, as an example of internetworking, many of the network nodes are not necessarily Internet equipment per se; the Internet packets are carried by other full-fledged networking protocols, with the Internet acting as a homogeneous networking standard running across heterogeneous hardware, and with the packets guided to their destinations by IP routers.
Service tiers
Internet service providers (ISPs) establish the worldwide connectivity between individual networks at various levels of scope. End-users who only access the Internet when needed to perform a function or obtain information represent the bottom of the routing hierarchy. At the top of the routing hierarchy are the tier 1 networks, large telecommunication companies that exchange traffic directly with each other via very high speed fibre optic cables, governed by peering agreements. Tier 2 and lower-level networks buy Internet transit from other providers to reach at least some parties on the global Internet, though they may also engage in peering. An ISP may use a single upstream provider for connectivity, or implement multihoming to achieve redundancy and load balancing. Internet exchange points are major traffic exchanges with physical connections to multiple ISPs. Large organizations, such as academic institutions, large enterprises, and governments, may perform the same function as ISPs, engaging in peering and purchasing transit on behalf of their internal networks. Research networks tend to interconnect with large subnetworks such as GEANT, GLORIAD, Internet2, and the UK's national research and education network, JANET.
Access
Common methods of Internet access by users include dial-up with a computer modem via telephone circuits, broadband over coaxial cable, fiber optics or copper wires, Wi-Fi, satellite, and cellular telephone technology (e.g. 3G, 4G). The Internet may often be accessed from computers in libraries and Internet cafes. Internet access points exist in many public places such as airport halls and coffee shops. Various terms are used, such as public Internet kiosk, public access terminal, and Web payphone. Many hotels also have public terminals that are usually fee-based. These terminals are widely accessed for various usages, such as ticket booking, bank deposit, or online payment. Wi-Fi provides wireless access to the Internet via local computer networks. Hotspots providing such access include Wi-Fi cafes, where users need to bring their own wireless devices such as a laptop or PDA. These services may be free to all, free to customers only, or fee-based.
Grassroots efforts have led to wireless community networks. Commercial Wi-Fi services that cover large areas are available in many cities, such as New York, London, Vienna, Toronto, San Francisco, Philadelphia, Chicago and Pittsburgh, where the Internet can then be accessed from places such as a park bench. Experiments have also been conducted with proprietary mobile wireless networks like Ricochet, various high-speed data services over cellular networks, and fixed wireless services. Modern smartphones can also access the Internet through the cellular carrier network. For Web browsing, these devices provide applications such as Google Chrome, Safari, and Firefox and a wide variety of other Internet software may be installed from app-stores. Internet usage by mobile and tablet devices exceeded desktop worldwide for the first time in October 2016.
Mobile communication
The International Telecommunication Union (ITU) estimated that, by the end of 2017, 48% of individuals regularly connected to the Internet, up from 34% in 2012. Mobile Internet connectivity has played an important role in expanding access in recent years, especially in Asia and the Pacific and in Africa. The number of unique mobile cellular subscriptions increased from 3.89 billion in 2012 to 4.83 billion in 2016, two-thirds of the world's population, with more than half of subscriptions located in Asia and the Pacific. The number of subscriptions is predicted to rise to 5.69 billion users in 2020. Almost 60% of the world's population had access to a 4G broadband cellular network, up from almost 50% in 2015 and 11% in 2012. The limits that users face on accessing information via mobile applications coincide with a broader process of fragmentation of the Internet. Fragmentation restricts access to media content and tends to affect the poorest users the most.
Zero-rating, the practice of Internet service providers allowing users free connectivity to access specific content or applications, has offered opportunities to surmount economic hurdles, but has also been criticized for creating a two-tiered Internet. To address the issues with zero-rating, an alternative model has emerged in the concept of 'equal rating' and is being tested in experiments by Mozilla and Orange in Africa. Equal rating prevents prioritization of one type of content and zero-rates all content up to a specified data cap. According to a study published by Chatham House, 15 out of 19 countries researched in Latin America had some kind of hybrid or zero-rated product on offer. Some countries in the region had a handful of plans to choose from (across all mobile network operators) while others, such as Colombia, offered as many as 30 pre-paid and 34 post-paid plans.
A study of eight countries in the Global South found that zero-rated data plans exist in every country, although there is a great range in the frequency with which they are offered and actually used in each. The study looked at the top three to five carriers by market share in Bangladesh, Colombia, Ghana, India, Kenya, Nigeria, Peru and Philippines. Across the 181 plans examined, 13 per cent were offering zero-rated services. Another study, covering Ghana, Kenya, Nigeria and South Africa, found Facebook's Free Basics and Wikipedia Zero to be the most commonly zero-rated content.
Internet Protocol Suite
The Internet standards describe a framework known as the Internet protocol suite (also called TCP/IP, based on its first two components). This is a suite of protocols that are ordered into a set of four conceptual layers by the scope of their operation. At the top is the application layer, where communication is described in terms of the objects or data structures most appropriate for each application. For example, a web browser operates in a client–server application model and exchanges information with the Hypertext Transfer Protocol (HTTP) and an application-germane data structure, such as the Hypertext Markup Language (HTML).
Below this top layer, the transport layer connects applications on different hosts with a logical channel through the network. It provides this service with a variety of possible characteristics, such as ordered, reliable delivery (TCP), and an unreliable datagram service (UDP).
Underlying these layers are the networking technologies that interconnect networks at their borders and exchange traffic across them. The Internet layer implements the Internet Protocol (IP) which enables computers to identify and locate each other by IP address, and route their traffic via intermediate (transit) networks. The internet protocol layer code is independent of the type of network that it is physically running over.
At the bottom of the architecture is the link layer, which connects nodes on the same physical link, and contains protocols that do not require routers for traversal to other links. The protocol suite does not explicitly specify hardware methods to transfer bits, or protocols to manage such hardware, but assumes that appropriate technology is available. Examples of that technology include Wi-Fi, Ethernet, and DSL.
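The interplay of these layers can be sketched with a few lines of Python from the standard library. The example below is only an illustration, assuming a host named example.com that answers plain HTTP on port 80: a name lookup at the application level, a TCP connection at the transport level carried by IP, and an HTTP exchange on top, with the link layer handled entirely by the operating system and network hardware.

```python
# Minimal sketch of the layered model in action: DNS name lookup, a TCP
# connection over IP, and an HTTP request on top. Assumes "example.com"
# answers plain HTTP on port 80; any reachable web host would do.
import socket

HOST, PORT = "example.com", 80

# Application-level naming: DNS maps the host name to an IP address.
ip_address = socket.gethostbyname(HOST)
print("Resolved", HOST, "to", ip_address)

# Transport layer: open a reliable, ordered byte stream (TCP) to the host;
# the internet layer (IP) routes the packets, the link layer moves the bits.
with socket.create_connection((ip_address, PORT), timeout=5) as conn:
    # Application layer: speak HTTP over the TCP stream.
    request = (
        "GET / HTTP/1.1\r\n"
        f"Host: {HOST}\r\n"
        "Connection: close\r\n"
        "\r\n"
    )
    conn.sendall(request.encode("ascii"))

    response = b""
    while chunk := conn.recv(4096):   # read until the server closes
        response += chunk

print(response.split(b"\r\n", 1)[0].decode())  # status line, e.g. "HTTP/1.1 200 OK"
```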
Internet protocol
The most prominent component of the Internet model is the Internet Protocol (IP). IP enables internetworking and, in essence, establishes the Internet itself. Two versions of the Internet Protocol exist, IPv4 and IPv6.
IP Addresses
For locating individual computers on the network, the Internet provides IP addresses. IP addresses are used by the Internet infrastructure to direct Internet packets to their destinations. They consist of fixed-length numbers, which are found within the packet. IP addresses are generally either assigned to equipment automatically via DHCP or configured manually.
However, the network also supports other addressing systems. Users generally enter domain names (e.g. "en.wikipedia.org") instead of IP addresses because they are easier to remember; they are converted by the Domain Name System (DNS) into IP addresses, which are more efficient for routing purposes.
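As a small illustration of this conversion, the sketch below asks the operating system's resolver for the addresses behind a domain name, using Python's standard socket module; it assumes the machine it runs on has a working DNS configuration, and the name shown is only an example.

```python
# Ask the system resolver for the IP addresses behind a human-readable
# domain name (assumes working DNS; the name used is only an example).
import socket

name = "en.wikipedia.org"
results = socket.getaddrinfo(name, None)            # query the resolver
addresses = sorted({result[4][0] for result in results})
for address in addresses:
    print(f"{name} -> {address}")   # may list both IPv4 and IPv6 addresses
```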
IPv4
Internet Protocol version 4 (IPv4) defines an IP address as a 32-bit number. IPv4 is the initial version used on the first generation of the Internet and is still in dominant use. It was designed to address up to ≈4.3 billion (10^9) hosts. However, the explosive growth of the Internet has led to IPv4 address exhaustion, which entered its final stage in 2011, when the global IPv4 address allocation pool was exhausted.
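To make the 32-bit format concrete, the sketch below uses Python's standard ipaddress module to show a dotted-decimal address as the single 32-bit integer it encodes, together with the total size of the IPv4 address space; the address chosen is from a range reserved for documentation and is purely illustrative.

```python
# An IPv4 address is a 32-bit number conventionally written in dotted-decimal.
import ipaddress

addr = ipaddress.IPv4Address("192.0.2.1")   # address from a documentation range
print(int(addr))           # 3221225985 -- the same address as one 32-bit integer
print(addr.packed.hex())   # 'c0000201'  -- its four bytes in hexadecimal
print(2 ** 32)             # 4294967296  -- about 4.3 billion possible addresses
```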
IPv6
Because of the growth of the Internet and the depletion of available IPv4 addresses, a new version of IP, IPv6, was developed in the mid-1990s; it provides vastly larger addressing capabilities and more efficient routing of Internet traffic. IPv6 uses 128 bits for the IP address and was standardized in 1998. IPv6 deployment has been ongoing since the mid-2000s and is currently in growing deployment around the world, since Internet address registries (RIRs) began to urge all resource managers to plan rapid adoption and conversion.
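By way of comparison with the 32-bit IPv4 format, the sketch below (again relying on Python's standard ipaddress module, with an address taken from the 2001:db8::/32 documentation prefix purely as an illustration) shows the compressed and fully expanded text forms of one IPv6 address and the size of the 128-bit address space.

```python
# An IPv6 address is a 128-bit number, usually written in compressed hex form.
import ipaddress

addr = ipaddress.IPv6Address("2001:db8::1")   # from the documentation prefix
print(addr.compressed)    # 2001:db8::1
print(addr.exploded)      # 2001:0db8:0000:0000:0000:0000:0000:0001
print(int(addr).bit_length() <= 128)   # True -- the value fits in 128 bits
print(2 ** 128)           # roughly 3.4 x 10**38 possible addresses
```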
IPv6 is not directly interoperable by design with IPv4. In essence, it establishes a parallel version of the Internet not directly accessible with IPv4 software. Thus, translation facilities must exist for internetworking or nodes must have duplicate networking software for both networks. Essentially all modern computer operating systems support both versions of the Internet Protocol. Network infrastructure, however, has been lagging in this development. Aside from the complex array of physical connections that make up its infrastructure, the Internet is facilitated by bi- or multi-lateral commercial contracts, e.g., peering agreements, and by technical specifications or protocols that describe the exchange of data over the network. Indeed, the Internet is defined by its interconnections and routing policies.
Subnetwork
A subnetwork or subnet is a logical subdivision of an IP network. The practice of dividing a network into two or more networks is called subnetting.
Computers that belong to a subnet are addressed with an identical most-significant bit-group in their IP addresses. This results in the logical division of an IP address into two fields, the network number or routing prefix and the rest field or host identifier. The rest field is an identifier for a specific host or network interface.
The routing prefix may be expressed in Classless Inter-Domain Routing (CIDR) notation, written as the first address of a network, followed by a slash character (/), and ending with the bit-length of the prefix. For example, a /24 IPv4 prefix has 24 bits allocated for the network prefix and the remaining 8 bits reserved for host addressing, so the 256 addresses that share the prefix belong to the network. An IPv6 prefix of length 32 specifies a large address block with 2^96 addresses.
For IPv4, a network may also be characterized by its subnet mask or netmask, which is the bitmask that, when applied by a bitwise AND operation to any IP address in the network, yields the routing prefix. Subnet masks are also expressed in dot-decimal notation like an address; for example, 255.255.255.0 is the subnet mask corresponding to a /24 prefix.
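The Python sketch below illustrates CIDR notation and the subnet mask, using the reserved documentation prefix 198.51.100.0/24 purely as an example.

```python
# Minimal sketch of CIDR notation and subnet masks with Python's ipaddress
# module; 198.51.100.0/24 is a reserved documentation prefix used purely as
# an illustration.
import ipaddress

net = ipaddress.ip_network("198.51.100.0/24")

print(net.network_address)   # 198.51.100.0   (routing prefix / network number)
print(net.netmask)           # 255.255.255.0  (subnet mask for a /24 prefix)
print(net.prefixlen)         # 24             (bits in the network prefix)
print(net.num_addresses)     # 256            (remaining 8 bits address the hosts)

# The mask works by a bitwise AND with any address in the network:
host = ipaddress.ip_address("198.51.100.37")
prefix = ipaddress.ip_address(int(host) & int(net.netmask))
print(prefix)                # 198.51.100.0
```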
Traffic is exchanged between subnetworks through routers when the routing prefixes of the source address and the destination address differ. A router serves as a logical or physical boundary between the subnets.
The benefits of subnetting an existing network vary with each deployment scenario. In the address allocation architecture of the Internet using CIDR and in large organizations, it is necessary to allocate address space efficiently. Subnetting may also enhance routing efficiency, or have advantages in network management when subnetworks are administratively controlled by different entities in a larger organization. Subnets may be arranged logically in a hierarchical architecture, partitioning an organization's network address space into a tree-like routing structure.
Routing
Computers and routers use routing tables in their operating system to direct IP packets to reach a node on a different subnetwork. Routing tables are maintained by manual configuration or automatically by routing protocols. End-nodes typically use a default route that points toward an ISP providing transit, while ISP routers use the Border Gateway Protocol to establish the most efficient routing across the complex connections of the global Internet. The default gateway is the node that serves as the forwarding host (router) to other networks when no other route specification matches the destination IP address of a packet.
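A toy longest-prefix-match lookup, sketched below in Python, illustrates how a routing table with a default route might be consulted; the prefixes and next-hop names are invented for the example and do not describe any real router implementation.

```python
# Illustrative sketch (not a real router): longest-prefix-match lookup in a
# routing table, with 0.0.0.0/0 acting as the default route.
import ipaddress

routing_table = [
    (ipaddress.ip_network("198.51.100.0/24"), "local interface"),
    (ipaddress.ip_network("203.0.113.0/24"),  "router B"),
    (ipaddress.ip_network("0.0.0.0/0"),       "default gateway"),  # matches everything
]

def next_hop(destination):
    dest = ipaddress.ip_address(destination)
    # Choose the matching route with the longest (most specific) prefix.
    matches = [(net, hop) for net, hop in routing_table if dest in net]
    return max(matches, key=lambda route: route[0].prefixlen)[1]

print(next_hop("198.51.100.7"))   # local interface
print(next_hop("192.0.2.44"))     # default gateway
```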
IETF
While the hardware components in the Internet infrastructure can often be used to support other software systems, it is the design and the standardization process of the software that characterizes the Internet and provides the foundation for its scalability and success. The responsibility for the architectural design of the Internet software systems has been assumed by the Internet Engineering Task Force (IETF). The IETF conducts standards-setting working groups, open to any individual, on the various aspects of Internet architecture. The resulting contributions and standards are published as Request for Comments (RFC) documents on the IETF web site. The principal methods of networking that enable the Internet are contained in specially designated RFCs that constitute the Internet Standards. Other less rigorous documents are simply informative, experimental, or historical, or document the best current practices (BCP) when implementing Internet technologies.
Applications and services
The Internet carries many applications and services, most prominently the World Wide Web, including social media, electronic mail, mobile applications, multiplayer online games, Internet telephony, file sharing, and streaming media services.
Most servers that provide these services are today hosted in data centers, and content is often accessed through high-performance content delivery networks.
World Wide Web
The World Wide Web is a global collection of documents, images, multimedia, applications, and other resources, logically interrelated by hyperlinks and referenced with Uniform Resource Identifiers (URIs), which provide a global system of named references. URIs symbolically identify services, web servers, databases, and the documents and resources that they can provide. Hypertext Transfer Protocol (HTTP) is the main access protocol of the World Wide Web. Web services also use HTTP for communication between software systems, for information transfer and for sharing and exchanging business data and logistics; HTTP is one of many protocols that can be used for communication on the Internet.
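As a minimal illustration, the Python sketch below fetches a resource identified by a URI over HTTPS using only the standard library; the URL is an example and any public web address would serve.

```python
# Minimal sketch: retrieving a Web resource over HTTP(S) with Python's
# standard urllib library.
import urllib.request

with urllib.request.urlopen("https://en.wikipedia.org/") as response:
    print(response.status)                   # e.g. 200
    print(response.headers["Content-Type"])  # e.g. text/html; charset=UTF-8
    body = response.read()                   # the HTML document itself
    print(len(body), "bytes received")
```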
World Wide Web browser software, such as Microsoft's Internet Explorer/Edge, Mozilla Firefox, Opera, Apple's Safari, and Google Chrome, lets users navigate from one web page to another via the hyperlinks embedded in the documents. These documents may also contain any combination of computer data, including graphics, sounds, text, video, multimedia and interactive content that runs while the user is interacting with the page. Client-side software can include animations, games, office applications and scientific demonstrations. Through keyword-driven Internet research using search engines like Yahoo!, Bing and Google, users worldwide have easy, instant access to a vast and diverse amount of online information. Compared to printed media, books, encyclopedias and traditional libraries, the World Wide Web has enabled the decentralization of information on a large scale.
The Web has enabled individuals and organizations to publish ideas and information to a potentially large audience online at greatly reduced expense and time delay. Publishing a web page, a blog, or building a website involves little initial cost and many cost-free services are available. However, publishing and maintaining large, professional web sites with attractive, diverse and up-to-date information is still a difficult and expensive proposition. Many individuals and some companies and groups use web logs or blogs, which are largely used as easily updatable online diaries. Some commercial organizations encourage staff to communicate advice in their areas of specialization in the hope that visitors will be impressed by the expert knowledge and free information, and be attracted to the corporation as a result.
Advertising on popular web pages can be lucrative, and e-commerce, which is the sale of products and services directly via the Web, continues to grow. Online advertising is a form of marketing and advertising which uses the Internet to deliver promotional marketing messages to consumers. It includes email marketing, search engine marketing (SEM), social media marketing, many types of display advertising (including web banner advertising), and mobile advertising. In 2011, Internet advertising revenues in the United States surpassed those of cable television and nearly exceeded those of broadcast television. Many common online advertising practices are controversial and increasingly subject to regulation.
When the Web developed in the 1990s, a typical web page was stored in completed form on a web server, formatted in HTML, complete for transmission to a web browser in response to a request. Over time, the process of creating and serving web pages has become dynamic, creating a flexible design, layout, and content. Websites are often created using content management software with, initially, very little content. Contributors to these systems, who may be paid staff, members of an organization or the public, fill underlying databases with content using editing pages designed for that purpose while casual visitors view and read this content in HTML form. There may or may not be editorial, approval and security systems built into the process of taking newly entered content and making it available to the target visitors.
Communication
Email is an important communications service available via the Internet. The concept of sending electronic text messages between parties, analogous to mailing letters or memos, predates the creation of the Internet. Pictures, documents, and other files are sent as email attachments. Email messages can be cc-ed to multiple email addresses.
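The sketch below shows, using Python's standard email library, how a message with an attachment and a carbon-copy recipient might be composed; the addresses, file name, and mail server are illustrative assumptions rather than working accounts.

```python
# Minimal sketch of composing an email with an attachment and a carbon copy;
# the addresses, file and server below are placeholders for illustration.
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "alice@example.org"
msg["To"] = "bob@example.org"
msg["Cc"] = "carol@example.org"          # additional recipients via carbon copy
msg["Subject"] = "Report attached"
msg.set_content("The quarterly report is attached.")

with open("report.pdf", "rb") as f:      # any file can travel as an attachment
    msg.add_attachment(f.read(), maintype="application",
                       subtype="pdf", filename="report.pdf")

# Sending would then use smtplib, e.g.:
# import smtplib
# with smtplib.SMTP("mail.example.org") as smtp:
#     smtp.send_message(msg)
```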
Internet telephony is a common communications service realized with the Internet. The name of the principal internetworking protocol, the Internet Protocol, lends its name to voice over Internet Protocol (VoIP). The idea began in the early 1990s with walkie-talkie-like voice applications for personal computers. VoIP systems now dominate many markets, and are as easy to use and as convenient as a traditional telephone. The benefit has been substantial cost savings over traditional telephone calls, especially over long distances. Cable, ADSL, and mobile data networks provide Internet access in customer premises and inexpensive VoIP network adapters provide the connection for traditional analog telephone sets. The voice quality of VoIP often exceeds that of traditional calls. Remaining problems for VoIP include the situation that emergency services may not be universally available, and that devices rely on a local power supply, while older traditional phones are powered from the local loop, and typically operate during a power failure.
Data transfer
File sharing is an example of transferring large amounts of data across the Internet. A computer file can be emailed to customers, colleagues and friends as an attachment. It can be uploaded to a website or File Transfer Protocol (FTP) server for easy download by others. It can be put into a "shared location" or onto a file server for instant use by colleagues. The load of bulk downloads to many users can be eased by the use of "mirror" servers or peer-to-peer networks. In any of these cases, access to the file may be controlled by user authentication, the transit of the file over the Internet may be obscured by encryption, and money may change hands for access to the file. The price can be paid by the remote charging of funds from, for example, a credit card whose details are also passed, usually fully encrypted, across the Internet. The origin and authenticity of the file received may be checked by digital signatures or by MD5 or other message digests. These simple features of the Internet, on a worldwide basis, are changing the production, sale, and distribution of anything that can be reduced to a computer file for transmission. This includes all manner of print publications, software products, news, music, film, video, photography, graphics and the other arts. This in turn has caused seismic shifts in each of the existing industries that previously controlled the production and distribution of these products.
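A minimal Python sketch of the integrity check mentioned above: computing a message digest of a downloaded file and comparing it with a value published by the distributor; the file name is only an example.

```python
# Minimal sketch: verifying the integrity of a downloaded file by computing
# a message digest with Python's hashlib.
import hashlib

def digest(path, algorithm="sha256"):
    h = hashlib.new(algorithm)
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):  # read in 64 KiB chunks
            h.update(chunk)
    return h.hexdigest()

# Compare the result with the digest published by the file's distributor.
print(digest("downloaded.iso"))          # SHA-256 (preferred today)
print(digest("downloaded.iso", "md5"))   # MD5, as mentioned above, is still common
```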
Streaming media is the real-time delivery of digital media for the immediate consumption or enjoyment by end users. Many radio and television broadcasters provide Internet feeds of their live audio and video productions. They may also allow time-shift viewing or listening such as Preview, Classic Clips and Listen Again features. These providers have been joined by a range of pure Internet "broadcasters" who never had on-air licenses. This means that an Internet-connected device, such as a computer or something more specific, can be used to access on-line media in much the same way as was previously possible only with a television or radio receiver. The range of available types of content is much wider, from specialized technical webcasts to on-demand popular multimedia services. Podcasting is a variation on this theme, where—usually audio—material is downloaded and played back on a computer or shifted to a portable media player to be listened to on the move. These techniques using simple equipment allow anybody, with little censorship or licensing control, to broadcast audio-visual material worldwide.
Digital media streaming increases the demand for network bandwidth. For example, standard image quality needs 1 Mbit/s link speed for SD 480p, HD 720p quality requires 2.5 Mbit/s, and the top-of-the-line HDX quality needs 4.5 Mbit/s for 1080p.
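The arithmetic behind these figures can be made concrete with a short calculation of how much data one hour of viewing transfers at each quoted bit rate (a rough estimate that ignores protocol overhead).

```python
# Back-of-the-envelope arithmetic for the streaming bit rates quoted above:
# how much data one hour of viewing transfers at each quality level.
rates_mbit_s = {"SD 480p": 1.0, "HD 720p": 2.5, "HDX 1080p": 4.5}

for quality, mbit_s in rates_mbit_s.items():
    gigabytes_per_hour = mbit_s * 3600 / 8 / 1000   # Mbit/s -> GB per hour
    print(f"{quality}: {gigabytes_per_hour:.2f} GB per hour")
# SD 480p: 0.45 GB per hour
# HD 720p: 1.12 GB per hour
# HDX 1080p: 2.02 GB per hour
```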
Webcams are a low-cost extension of this phenomenon. While some webcams can give full-frame-rate video, the picture is usually either small or slow to update. Internet users can watch animals around an African waterhole, ships in the Panama Canal, traffic at a local roundabout or monitor their own premises, live and in real time. Video chat rooms and video conferencing are also popular with many uses being found for personal webcams, with and without two-way sound. YouTube was founded on 15 February 2005 and is now the leading website for free streaming video with more than two billion users. It uses an HTML5 based web player by default to stream and show video files. Registered users may upload an unlimited amount of video and build their own personal profile. YouTube claims that its users watch hundreds of millions, and upload hundreds of thousands of videos daily.
Social impact
The Internet has enabled new forms of social interaction, activities, and social associations. This phenomenon has given rise to the scholarly study of the sociology of the Internet.
Users
From 2000 to 2009, the number of Internet users globally rose from 394 million to 1.858 billion. By 2010, 22 percent of the world's population had access to computers with 1 billion Google searches every day, 300 million Internet users reading blogs, and 2 billion videos viewed daily on YouTube. In 2014 the world's Internet users surpassed 3 billion or 43.6 percent of world population, but two-thirds of the users came from the richest countries, with 78.0 percent of the population of European countries using the Internet, followed by 57.4 percent of the Americas. However, by 2018, Asia alone accounted for 51% of all Internet users, with 2.2 billion out of the 4.3 billion Internet users in the world coming from that region. The number of China's Internet users surpassed a major milestone in 2018, when the country's Internet regulatory authority, China Internet Network Information Centre, announced that China had 802 million Internet users. By 2019, China was the world's leading country in terms of Internet users, with more than 800 million users, followed closely by India, with some 700 million users, with the United States a distant third with 275 million users. However, in terms of penetration, China has a 38.4% penetration rate compared to India's 40% and the United States's 80%. As of 2020, it was estimated that 4.5 billion people use the Internet, more than half of the world's population.
The prevalent language for communication via the Internet has always been English. This may be a result of the origin of the Internet, as well as the language's role as a lingua franca and as a world language. Early computer systems were limited to the characters in the American Standard Code for Information Interchange (ASCII), a subset of the Latin alphabet.
After English (27%), the most requested languages on the World Wide Web are Chinese (25%), Spanish (8%), Japanese (5%), Portuguese and German (4% each), Arabic, French and Russian (3% each), and Korean (2%). By region, 42% of the world's Internet users are based in Asia, 24% in Europe, 14% in North America, 10% in Latin America and the Caribbean taken together, 6% in Africa, 3% in the Middle East and 1% in Australia/Oceania. The Internet's technologies have developed enough in recent years, especially in the use of Unicode, that good facilities are available for development and communication in the world's widely used languages. However, some glitches such as mojibake (incorrect display of some languages' characters) still remain.
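Mojibake can be reproduced in a few lines: text encoded as UTF-8 but decoded with a different, incompatible encoding comes out garbled, as the Python sketch below shows.

```python
# Minimal sketch of how mojibake arises: bytes interpreted with the wrong
# character encoding produce garbled text.
text = "Grüße"                        # German for "greetings"
utf8_bytes = text.encode("utf-8")

print(utf8_bytes.decode("utf-8"))     # Grüße    (correct decoding)
print(utf8_bytes.decode("cp1252"))    # GrÃ¼ÃŸe  (mojibake: wrong decoding)
```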
In an American study in 2005, the percentage of men using the Internet was very slightly ahead of the percentage of women, although this difference reversed in those under 30. Men logged on more often, spent more time online, and were more likely to be broadband users, whereas women tended to make more use of opportunities to communicate (such as email). Men were more likely to use the Internet to pay bills, participate in auctions, and for recreation such as downloading music and videos. Men and women were equally likely to use the Internet for shopping and banking.
More recent studies indicate that in 2008, women significantly outnumbered men on most social networking services, such as Facebook and Myspace, although the ratios varied with age. In addition, women watched more streaming content, whereas men downloaded more. In terms of blogs, men were more likely to blog in the first place; among those who blog, men were more likely to have a professional blog, whereas women were more likely to have a personal blog.
Splitting by country, in 2012 Iceland, Norway, Sweden, the Netherlands, and Denmark had the highest Internet penetration by the number of users, with 93% or more of the population with access.
Several neologisms exist that refer to Internet users: Netizen (as in "citizen of the net") refers to those actively involved in improving online communities, the Internet in general or surrounding political affairs and rights such as free speech; Internaut refers to operators or technically highly capable users of the Internet; and digital citizen refers to a person using the Internet in order to engage in society, politics, and government participation.
Usage
The Internet allows greater flexibility in working hours and location, especially with the spread of unmetered high-speed connections. The Internet can be accessed almost anywhere by numerous means, including through mobile Internet devices. Mobile phones, datacards, handheld game consoles and cellular routers allow users to connect to the Internet wirelessly. Within the limitations imposed by small screens and other limited facilities of such pocket-sized devices, the services of the Internet, including email and the web, may be available. Service providers may restrict the services offered and mobile data charges may be significantly higher than other access methods.
Educational material at all levels from pre-school to post-doctoral is available from websites. Examples range from CBeebies, through school and high-school revision guides and virtual universities, to access to top-end scholarly literature through the likes of Google Scholar. For distance education, help with homework and other assignments, self-guided learning, whiling away spare time or just looking up more detail on an interesting fact, it has never been easier for people to access educational information at any level from anywhere. The Internet in general and the World Wide Web in particular are important enablers of both formal and informal education. Further, the Internet allows universities, in particular, researchers from the social and behavioral sciences, to conduct research remotely via virtual laboratories, with profound changes in reach and generalizability of findings as well as in communication between scientists and in the publication of results.
The low cost and nearly instantaneous sharing of ideas, knowledge, and skills have made collaborative work dramatically easier, with the help of collaborative software. Not only can a group cheaply communicate and share ideas but the wide reach of the Internet allows such groups more easily to form. An example of this is the free software movement, which has produced, among other things, Linux, Mozilla Firefox, and OpenOffice.org (later forked into LibreOffice). Internet chat, whether using an IRC chat room, an instant messaging system, or a social networking service, allows colleagues to stay in touch in a very convenient way while working at their computers during the day. Messages can be exchanged even more quickly and conveniently than via email. These systems may allow files to be exchanged, drawings and images to be shared, or voice and video contact between team members.
Content management systems allow collaborating teams to work on shared sets of documents simultaneously without accidentally destroying each other's work. Business and project teams can share calendars as well as documents and other information. Such collaboration occurs in a wide variety of areas including scientific research, software development, conference planning, political activism and creative writing. Social and political collaboration is also becoming more widespread as both Internet access and computer literacy spread.
The Internet allows computer users to remotely access other computers and information stores easily from any access point. Access may be with computer security, i.e. authentication and encryption technologies, depending on the requirements. This is encouraging new ways of working from home, collaboration and information sharing in many industries. An accountant sitting at home can audit the books of a company based in another country, on a server situated in a third country that is remotely maintained by IT specialists in a fourth. These accounts could have been created by home-working bookkeepers, in other remote locations, based on information emailed to them from offices all over the world. Some of these things were possible before the widespread use of the Internet, but the cost of private leased lines would have made many of them infeasible in practice. An office worker away from their desk, perhaps on the other side of the world on a business trip or a holiday, can access their emails, access their data using cloud computing, or open a remote desktop session into their office PC using a secure virtual private network (VPN) connection on the Internet. This can give the worker complete access to all of their normal files and data, including email and other applications, while away from the office. It has been referred to among system administrators as the Virtual Private Nightmare, because it extends the secure perimeter of a corporate network into remote locations and its employees' homes.
By the late 2010s, the Internet had been described as "the main source of scientific information for the majority of the global North population".
Social networking and entertainment
Many people use the World Wide Web to access news, weather and sports reports, to plan and book vacations and to pursue their personal interests. People use chat, messaging and email to make and stay in touch with friends worldwide, sometimes in the same way as some previously had pen pals. Social networking services such as Facebook have created new ways to socialize and interact. Users of these sites are able to add a wide variety of information to pages, pursue common interests, and connect with others. It is also possible to find existing acquaintances, to allow communication among existing groups of people. Sites like LinkedIn foster commercial and business connections. YouTube and Flickr specialize in users' videos and photographs. Social networking services are also widely used by businesses and other organizations to promote their brands, to market to their customers and to encourage posts to "go viral". "Black hat" social media techniques are also employed by some organizations, such as spam accounts and astroturfing.
A risk for both individuals and organizations writing posts (especially public posts) on social networking services, is that especially foolish or controversial posts occasionally lead to an unexpected and possibly large-scale backlash on social media from other Internet users. This is also a risk in relation to controversial offline behavior, if it is widely made known. The nature of this backlash can range widely from counter-arguments and public mockery, through insults and hate speech, to, in extreme cases, rape and death threats. The online disinhibition effect describes the tendency of many individuals to behave more stridently or offensively online than they would in person. A significant number of feminist women have been the target of various forms of harassment in response to posts they have made on social media, and Twitter in particular has been criticised in the past for not doing enough to aid victims of online abuse.
For organizations, such a backlash can cause overall brand damage, especially if reported by the media. However, this is not always the case, as any brand damage in the eyes of people with an opposing opinion to that presented by the organization could sometimes be outweighed by strengthening the brand in the eyes of others. Furthermore, if an organization or individual gives in to demands that others perceive as wrong-headed, that can then provoke a counter-backlash.
Some websites, such as Reddit, have rules forbidding the posting of personal information of individuals (also known as doxxing), due to concerns about such postings leading to mobs of large numbers of Internet users directing harassment at the specific individuals thereby identified. In particular, the Reddit rule forbidding the posting of personal information is widely understood to imply that all identifying photos and names must be censored in Facebook screenshots posted to Reddit. However, the interpretation of this rule in relation to public Twitter posts is less clear, and in any case, like-minded people online have many other ways they can use to direct each other's attention to public social media posts they disagree with.
Children also face dangers online such as cyberbullying and approaches by sexual predators, who sometimes pose as children themselves. Children may also encounter material which they may find upsetting, or material that their parents consider to be not age-appropriate. Due to naivety, they may also post personal information about themselves online, which could put them or their families at risk unless warned not to do so. Many parents choose to enable Internet filtering or supervise their children's online activities in an attempt to protect their children from inappropriate material on the Internet. The most popular social networking services, such as Facebook and Twitter, commonly forbid users under the age of 13. However, these policies are typically trivial to circumvent by registering an account with a false birth date, and a significant number of children aged under 13 join such sites anyway. Social networking services for younger children, which claim to provide better levels of protection for children, also exist.
The Internet has been a major outlet for leisure activity since its inception, with entertaining social experiments such as MUDs and MOOs being conducted on university servers, and humor-related Usenet groups receiving much traffic. Many Internet forums have sections devoted to games and funny videos. The Internet pornography and online gambling industries have taken advantage of the World Wide Web. Although many governments have attempted to restrict both industries' use of the Internet, in general, this has failed to stop their widespread popularity.
Another area of leisure activity on the Internet is multiplayer gaming. This form of recreation creates communities, where people of all ages and origins enjoy the fast-paced world of multiplayer games. These range from MMORPG to first-person shooters, from role-playing video games to online gambling. While online gaming has been around since the 1970s, modern modes of online gaming began with subscription services such as GameSpy and MPlayer. Non-subscribers were limited to certain types of game play or certain games. Many people use the Internet to access and download music, movies and other works for their enjoyment and relaxation. Free and fee-based services exist for all of these activities, using centralized servers and distributed peer-to-peer technologies. Some of these sources exercise more care with respect to the original artists' copyrights than others.
Internet usage has been correlated to users' loneliness. Lonely people tend to use the Internet as an outlet for their feelings and to share their stories with others, such as in the "I am lonely will anyone speak to me" thread.
A 2017 book claimed that the Internet consolidates most aspects of human endeavor into singular arenas of which all of humanity are potential members and competitors, with fundamentally negative impacts on mental health as a result. While successes in each field of activity are pervasively visible and trumpeted, they are reserved for an extremely thin sliver of the world's most exceptional, leaving everyone else behind. Whereas, before the Internet, expectations of success in any field were supported by reasonable probabilities of achievement at the village, suburb, city or even state level, the same expectations in the Internet world are virtually certain to bring disappointment today: there is always someone else, somewhere on the planet, who can do better and take the now one-and-only top spot.
Cybersectarianism is a new organizational form which involves: "highly dispersed small groups of practitioners that may remain largely anonymous within the larger social context and operate in relative secrecy, while still linked remotely to a larger network of believers who share a set of practices and texts, and often a common devotion to a particular leader. Overseas supporters provide funding and support; domestic practitioners distribute tracts, participate in acts of resistance, and share information on the internal situation with outsiders. Collectively, members and practitioners of such sects construct viable virtual communities of faith, exchanging personal testimonies and engaging in the collective study via email, on-line chat rooms, and web-based message boards." In particular, the British government has raised concerns about the prospect of young British Muslims being indoctrinated into Islamic extremism by material on the Internet, being persuaded to join terrorist groups such as the so-called "Islamic State", and then potentially committing acts of terrorism on returning to Britain after fighting in Syria or Iraq.
Cyberslacking can become a drain on corporate resources; the average UK employee spent 57 minutes a day surfing the Web while at work, according to a 2003 study by Peninsula Business Services. Internet addiction disorder is excessive computer use that interferes with daily life. Nicholas G. Carr believes that Internet use has other effects on individuals, for instance improving skills of scan-reading and interfering with the deep thinking that leads to true creativity.
Electronic business
Electronic business (e-business) encompasses business processes spanning the entire value chain: purchasing, supply chain management, marketing, sales, customer service, and business relationships. E-commerce seeks to add revenue streams using the Internet to build and enhance relationships with clients and partners. According to International Data Corporation, the size of worldwide e-commerce, when global business-to-business and -consumer transactions are combined, equated to $16 trillion for 2013. A report by Oxford Economics added those two together to estimate the total size of the digital economy at $20.4 trillion, equivalent to roughly 13.8% of global sales.
While much has been written of the economic advantages of Internet-enabled commerce, there is also evidence that some aspects of the Internet such as maps and location-aware services may serve to reinforce economic inequality and the digital divide. Electronic commerce may be responsible for consolidation and the decline of mom-and-pop, brick and mortar businesses resulting in increases in income inequality.
Author Andrew Keen, a long-time critic of the social transformations caused by the Internet, has focused on the economic effects of consolidation from Internet businesses. Keen cites a 2013 Institute for Local Self-Reliance report saying brick-and-mortar retailers employ 47 people for every $10 million in sales while Amazon employs only 14. Similarly, the 700-employee room rental start-up Airbnb was valued at $10 billion in 2014, about half as much as Hilton Worldwide, which employs 152,000 people. At that time, Uber employed 1,000 full-time employees and was valued at $18.2 billion, about the same valuation as Avis Rent a Car and The Hertz Corporation combined, which together employed almost 60,000 people.
Telecommuting
Telecommuting is the performance of work within a traditional worker and employer relationship when it is facilitated by tools such as groupware, virtual private networks, conference calling, videoconferencing, and VoIP, so that work may be performed from any location, most conveniently the worker's home. It can be efficient and useful for companies as it allows workers to communicate over long distances, saving significant amounts of travel time and cost. As broadband Internet connections become commonplace, more workers have adequate bandwidth at home to use these tools to link their home to their corporate intranet and internal communication networks.
Collaborative publishing
Wikis have also been used in the academic community for sharing and dissemination of information across institutional and international boundaries. In those settings, they have been found useful for collaboration on grant writing, strategic planning, departmental documentation, and committee work. The United States Patent and Trademark Office uses a wiki to allow the public to collaborate on finding prior art relevant to examination of pending patent applications. Queens, New York has used a wiki to allow citizens to collaborate on the design and planning of a local park. The English Wikipedia has the largest user base among wikis on the World Wide Web and ranks in the top 10 among all Web sites in terms of traffic.
Politics and political revolutions
The Internet has achieved new relevance as a political tool. The presidential campaign of Howard Dean in 2004 in the United States was notable for its success in soliciting donations via the Internet. Many political groups use the Internet to achieve a new method of organizing to carry out their missions, which has given rise to Internet activism, most notably practiced by rebels in the Arab Spring. The New York Times suggested that social media websites, such as Facebook and Twitter, helped people organize the political revolutions in Egypt, by helping activists organize protests, communicate grievances, and disseminate information.
Many have understood the Internet as an extension of the Habermasian notion of the public sphere, observing how network communication technologies provide something like a global civic forum. However, incidents of politically motivated Internet censorship have now been recorded in many countries, including western democracies.
Philanthropy
The spread of low-cost Internet access in developing countries has opened up new possibilities for peer-to-peer charities, which allow individuals to contribute small amounts to charitable projects for other individuals. Websites, such as DonorsChoose and GlobalGiving, allow small-scale donors to direct funds to individual projects of their choice. A popular twist on Internet-based philanthropy is the use of peer-to-peer lending for charitable purposes. Kiva pioneered this concept in 2005, offering the first web-based service to publish individual loan profiles for funding. Kiva raises funds for local intermediary microfinance organizations that post stories and updates on behalf of the borrowers. Lenders can contribute as little as $25 to loans of their choice, and receive their money back as borrowers repay. Kiva falls short of being a pure peer-to-peer charity, in that loans are disbursed before being funded by lenders and borrowers do not communicate with lenders themselves.
Security
Internet resources, hardware, and software components are the target of criminal or malicious attempts to gain unauthorized control to cause interruptions, commit fraud, engage in blackmail or access private information.
Malware
Malware is malicious software used and distributed via the Internet. It includes computer viruses which are copied with the help of humans, computer worms which copy themselves automatically, software for denial of service attacks, ransomware, botnets, and spyware that reports on the activity and typing of users. Usually, these activities constitute cybercrime. Defense theorists have also speculated about the possibilities of hackers using cyber warfare using similar methods on a large scale.
Surveillance
The vast majority of computer surveillance involves the monitoring of data and traffic on the Internet. In the United States for example, under the Communications Assistance For Law Enforcement Act, all phone calls and broadband Internet traffic (emails, web traffic, instant messaging, etc.) are required to be available for unimpeded real-time monitoring by Federal law enforcement agencies. Packet capture is the monitoring of data traffic on a computer network. Computers communicate over the Internet by breaking up messages (emails, images, videos, web pages, files, etc.) into small chunks called "packets", which are routed through a network of computers, until they reach their destination, where they are assembled back into a complete "message" again. A packet capture appliance intercepts these packets as they are traveling through the network, in order to examine their contents using other programs. A packet capture is an information gathering tool, but not an analysis tool; that is, it gathers "messages" but it does not analyze them and figure out what they mean. Other programs are needed to perform traffic analysis and sift through intercepted data looking for important/useful information. Under the Communications Assistance For Law Enforcement Act all U.S. telecommunications providers are required to install packet sniffing technology to allow Federal law enforcement and intelligence agencies to intercept all of their customers' broadband Internet and VoIP traffic.
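A rough, Linux-only Python sketch of the capture step is shown below; it requires root privileges, only gathers raw frames, and leaves all interpretation to separate analysis tools. It is not a description of any particular appliance or surveillance system.

```python
# Rough, Linux-only sketch of packet capture with a raw socket (requires
# root privileges); tools such as tcpdump use similar low-level mechanisms
# but add filtering and protocol decoding.
import socket

ETH_P_ALL = 0x0003   # capture every protocol carried on the link

with socket.socket(socket.AF_PACKET, socket.SOCK_RAW, socket.ntohs(ETH_P_ALL)) as sniffer:
    for _ in range(5):
        frame, interface_info = sniffer.recvfrom(65535)
        # The raw bytes are only gathered here; interpreting them (Ethernet,
        # IP, TCP headers, payload) is the job of separate analysis software.
        print(f"captured {len(frame)} bytes on {interface_info[0]}")
```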
The large amount of data gathered from packet capturing requires surveillance software that filters and reports relevant information, such as the use of certain words or phrases, the access of certain types of web sites, or communicating via email or chat with certain parties. Agencies, such as the Information Awareness Office, NSA, GCHQ and the FBI, spend billions of dollars per year to develop, purchase, implement, and operate systems for interception and analysis of data. Similar systems are operated by Iranian secret police to identify and suppress dissidents. The required hardware and software was allegedly installed by German Siemens AG and Finnish Nokia.
Censorship
Some governments, such as those of Burma, Iran, North Korea, Mainland China, Saudi Arabia and the United Arab Emirates, restrict access to content on the Internet within their territories, especially to political and religious content, with domain name and keyword filters.
In Norway, Denmark, Finland, and Sweden, major Internet service providers have voluntarily agreed to restrict access to sites listed by authorities. While this list of forbidden resources is supposed to contain only known child pornography sites, the content of the list is secret. Many countries, including the United States, have enacted laws against the possession or distribution of certain material, such as child pornography, via the Internet, but do not mandate filter software. Many free or commercially available software programs, called content-control software are available to users to block offensive websites on individual computers or networks, in order to limit access by children to pornographic material or depiction of violence.
Performance
As the Internet is a heterogeneous network, the physical characteristics, including for example the data transfer rates of connections, vary widely. It exhibits emergent phenomena that depend on its large-scale organization.
Traffic volume
The volume of Internet traffic is difficult to measure, because no single point of measurement exists in the multi-tiered, non-hierarchical topology. Traffic data may be estimated from the aggregate volume through the peering points of the Tier 1 network providers, but traffic that stays local in large provider networks may not be accounted for.
Outages
An Internet blackout or outage can be caused by local signalling interruptions. Disruptions of submarine communications cables may cause blackouts or slowdowns to large areas, such as in the 2008 submarine cable disruption. Less-developed countries are more vulnerable due to a small number of high-capacity links. Land cables are also vulnerable, as in 2011 when a woman digging for scrap metal severed most connectivity for the nation of Armenia. Internet blackouts affecting almost entire countries can be achieved by governments as a form of Internet censorship, as in the blockage of the Internet in Egypt, whereby approximately 93% of networks were without access in 2011 in an attempt to stop mobilization for anti-government protests.
Energy use
Estimates of the Internet's electricity usage have been the subject of controversy, according to a 2014 peer-reviewed research paper that found claims differing by a factor of 20,000 published in the literature during the preceding decade, ranging from 0.0064 kilowatt hours per gigabyte transferred (kWh/GB) to 136 kWh/GB. The researchers attributed these discrepancies mainly to the year of reference (i.e. whether efficiency gains over time had been taken into account) and to whether "end devices such as personal computers and servers are included" in the analysis.
In 2011, academic researchers estimated the overall energy used by the Internet to be between 170 and 307 GW, less than two percent of the energy used by humanity. This estimate included the energy needed to build, operate, and periodically replace the estimated 750 million laptops, a billion smart phones and 100 million servers worldwide as well as the energy that routers, cell towers, optical switches, Wi-Fi transmitters and cloud storage devices use when transmitting Internet traffic. According to a non-peer reviewed study published in 2018 by The Shift Project (a French think tank funded by corporate sponsors), nearly 4% of global CO2 emissions could be attributed to global data transfer and the necessary infrastructure. The study also said that online video streaming alone accounted for 60% of this data transfer and therefore contributed to over 300 million tons of CO2 emission per year, and argued for new "digital sobriety" regulations restricting the use and size of video files.
See also
Crowdfunding
Crowdsourcing
Darknet
Deep web
Freenet
Internet industry jargon
Index of Internet-related articles
Internet metaphors
Internet video
"Internets"
Open Systems Interconnection
Outline of the Internet
Notes
References
Sources
Further reading
First Monday, a peer-reviewed journal on the Internet, published by the University Library of the University of Illinois at Chicago.
The Internet Explained, Vincent Zegna & Mike Pepper, Sonet Digital, November 2005, pp. 1–7.
External links
The Internet Society
Living Internet, Internet history and related information, including information from many creators of the Internet
1969 establishments in the United States
American inventions
Computer-related introductions in 1969
Computer-related introductions in 1989
Cultural globalization
Digital technology
Mass media technology
New media
Promotion and marketing communications
Public services
Telegraphy
Transport systems
Virtual reality
Main topic articles |
14722 | https://en.wikipedia.org/wiki/Irssi | Irssi | Irssi is an IRC client program for Linux, FreeBSD, macOS and Microsoft Windows. It was originally written by Timo Sirainen, and released under the terms of the GNU GPL-2.0-or-later in January 1999.
Features
Irssi is written in the C programming language and in normal operation uses a text-mode user interface.
According to the developers, Irssi was written from scratch, not based on ircII (like BitchX and epic). This freed the developers from having to deal with the constraints of an existing codebase, allowing them to maintain tighter control over issues such as security and customization. Numerous Perl scripts have been made available for Irssi to customise how it looks and operates. Plugins are available which add encryption and protocols such as ICQ and XMPP.
Irssi may be configured by using its user interface or by manually editing its configuration files, which use a syntax resembling Perl data structures.
Distributions
Irssi was written primarily to run on Unix-like operating systems, and binaries and packages are available for Gentoo Linux, Debian, Slackware, SUSE (openSUSE), Frugalware, Fedora, FreeBSD, OpenBSD, NetBSD, DragonFly BSD, Solaris, Arch Linux, Ubuntu, NixOS, and others.
Irssi builds and runs on Microsoft Windows under Cygwin, and in 2006, an official Windows standalone build became available.
For the Unix-based macOS, text mode ports are available from the Homebrew, MacPorts, and Fink package managers, and two graphical clients, IrssiX and MacIrssi, have been written based on Irssi. The Cocoa client Colloquy was previously based on Irssi, but it now uses its own IRC core implementation.
See also
Comparison of Internet Relay Chat clients
Shell account
WeeChat
References
External links
irssi on GitHub
on Libera.chat
Internet Relay Chat clients
Free Internet Relay Chat clients
MacOS Internet Relay Chat clients
Unix Internet Relay Chat clients
Windows Internet Relay Chat clients
Free software programmed in C
Cross-platform software
1999 software
Software that uses ncurses
Console applications
Software developed in Finland |
14730 | https://en.wikipedia.org/wiki/Internet%20Relay%20Chat | Internet Relay Chat | Internet Relay Chat (IRC) is a text-based chat (instant messaging) system. IRC is designed for group communication in discussion forums, called channels, but also allows one-on-one communication via private messages as well as chat and data transfer, including file sharing.
Internet Relay Chat is implemented as an application layer protocol to facilitate communication in the form of text. The chat process works on a client–server networking model. Users connect to an IRC server, which may be part of a larger IRC network. Users connect using a client, which may be a web app, a standalone desktop program, or embedded into part of a larger program. Examples of programs used to connect include Mibbit, IRCCloud, KiwiIRC, and mIRC.
IRC usage has been declining steadily since 2003, losing 60 percent of its users. In April 2011, the top 100 IRC networks served more than half a million users at a time. There are 481 different IRC networks known to be operating, of which the open source Libera Chat, founded in May 2021, has the most users, with 20,374 channels on 26 servers; between them, the top 100 IRC networks share over 100 thousand channels operating on about one thousand servers.
History
IRC was created by Jarkko Oikarinen in August 1988 to replace a program called MUT (MultiUser Talk) on a BBS called OuluBox at the University of Oulu in Finland, where he was working at the Department of Information Processing Science. Jarkko intended to extend the BBS software he administered, to allow news in the Usenet style, real time discussions and similar BBS features. The first part he implemented was the chat part, which he did with borrowed parts written by his friends Jyrki Kuoppala and Jukka Pihl. The first IRC network was running on a single server named tolsun.oulu.fi. Oikarinen found inspiration in a chat system known as Bitnet Relay, which operated on the BITNET.
Jyrki Kuoppala pushed Oikarinen to ask Oulu University to free the IRC code so that it also could be run outside of Oulu, and after they finally got it released, Jyrki Kuoppala immediately installed another server. This was the first "IRC network". Oikarinen got some friends at the Helsinki University and Tampere University to start running IRC servers when his number of users increased and other universities soon followed. At this time Oikarinen realized that the rest of the BBS features probably wouldn't fit in his program.
Oikarinen got in touch with people at the University of Denver and Oregon State University. They had their own IRC network running and wanted to connect to the Finnish network. They had obtained the program from one of Oikarinen's friends, Vijay Subramaniam—the first non-Finnish person to use IRC. IRC then grew larger and got used on the entire Finnish national network—FUNET—and then connected to Nordunet, the Scandinavian branch of the Internet. In November 1988, IRC had spread across the Internet and in the middle of 1989, there were some 40 servers worldwide.
EFnet
In August 1990, the first major disagreement took place in the IRC world. The "A-net" (Anarchy net) included a server named eris.berkeley.edu. It was all open, required no passwords and had no limit on the number of connects. As Greg "wumpus" Lindahl explains: "it had a wildcard server line, so people were hooking up servers and nick-colliding everyone". The "Eris Free Network", EFnet, made the eris machine the first to be Q-lined (Q for quarantine) from IRC. In wumpus' words again: "Eris refused to remove that line, so I formed EFnet. It wasn't much of a fight; I got all the hubs to join, and almost everyone else got carried along." A-net was formed with the eris servers, while EFnet was formed with the non-eris servers. History showed most servers and users went with EFnet. Once A-net disbanded, the name EFnet became meaningless, and once again it was the one and only IRC network.
Around that time IRC was used to report on the 1991 Soviet coup d'état attempt throughout a media blackout. It was previously used in a similar fashion during the Gulf War. Chat logs of these and other events are kept in the ibiblio archive.
Undernet fork
Another fork effort, the first that made a lasting difference, was initiated by "Wildthang" in the United States in October 1992. (It forked off the EFnet ircd version 2.8.10). It was meant to be just a test network to develop bots on but it quickly grew to a network "for friends and their friends". In Europe and Canada a separate new network was being worked on and in December the French servers connected to the Canadian ones, and by the end of the month, the French and Canadian network was connected to the US one, forming the network that later came to be called "The Undernet".
The "undernetters" wanted to take ircd further in an attempt to make it less bandwidth consumptive and to try to sort out the channel chaos (netsplits and takeovers) that EFnet started to suffer from. For the latter purpose, the Undernet implemented timestamps, new routing and offered the CService—a program that allowed users to register channels and then attempted to protect them from troublemakers. The first server list presented, from 15 February 1993, includes servers from the U.S., Canada, France, Croatia and Japan. On 15 August, the new user count record was set to 57 users.
In May 1993, RFC 1459 was published, detailing a simple protocol for client/server operation, channels, and one-to-one and one-to-many conversations. Notably, a significant number of extensions like CTCP, colors and formats are not included in the protocol specifications, nor is character encoding, which led various implementations of servers and clients to diverge. Software implementations varied significantly from one network to another, each network implementing its own policies and standards in its own code base.
DALnet fork
During the summer of 1994, the Undernet was itself forked. The new network was called DALnet (named after its founder: dalvenjah), formed for better user service and more user and channel protections. One of the more significant changes in DALnet was use of longer nicknames (the original ircd limit being 9 letters). DALnet ircd modifications were made by Alexei "Lefler" Kosut. DALnet was thus based on the Undernet ircd server, although the DALnet pioneers were EFnet abandoners. According to James Ng, the initial DALnet people were "ops in #StarTrek sick from the constant splits/lags/takeovers/etc".
DALnet quickly offered global WallOps (IRCop messages that can be seen by users who are +w (/mode NickName +w)), longer nicknames, Q:Lined nicknames (nicknames that cannot be used i.e. ChanServ, IRCop, NickServ, etc.), global K:Lines (ban of one person or an entire domain from a server or the entire network), IRCop only communications: GlobOps, +H mode showing that an IRCop is a "helpop" etc. Much of DALnet's new functions were written in early 1995 by Brian "Morpher" Smith and allow users to own nicknames, control channels, send memos, and more.
IRCnet fork
In July 1996, after months of flame wars and discussions on the mailing list, there was yet another split due to disagreement in how the development of the ircd should evolve. Most notably, the "European" (most of those servers were in Europe) side that later named itself IRCnet argued for nick and channel delays whereas the EFnet side argued for timestamps. There were also disagreements about policies: the European side had started to establish a set of rules directing what IRCops could and could not do, a point of view opposed by the US side.
Most (not all) of the IRCnet servers were in Europe, while most of the EFnet servers were in the US. This event is also known as "The Great Split" in many IRC societies. EFnet has since (as of August 1998) grown and passed the number of users it had then. In the (northern) autumn of the year 2000, EFnet had some 50,000 users and IRCnet 70,000.
Modern IRC
IRC has changed much over its life on the Internet. New server software has added a multitude of new features.
Services: Network-operated bots to facilitate registration of nicknames and channels, sending messages for offline users and network operator functions.
Extra modes: While the original IRC system used a set of standard user and channel modes, new servers add many new modes for features such as removing color codes from text, or obscuring a user's hostmask ("cloaking") to protect from denial-of-service attacks.
Proxy detection: Most modern servers support detection of users attempting to connect through an insecure (misconfigured or exploited) proxy server, which can then be denied a connection. Such proxy detection software is used by several networks, although the real-time list of proxies it relied on has been defunct since early 2006.
Additional commands: New commands can be such things as shorthand commands to issue commands to Services, to network-operator-only commands to manipulate a user's hostmask.
Encryption: For the client-to-server leg of the connection TLS might be used (messages cease to be secure once they are relayed to other users on standard connections, but it makes eavesdropping on or wiretapping an individual's IRC sessions difficult). For client-to-client communication, SDCC (Secure DCC) can be used.
Connection protocol: IRC can be connected to via IPv4, the old version of the Internet Protocol, or by IPv6, the current standard of the protocol.
A new standardization effort is under way under a working group called IRCv3, which focuses on more advanced client features like instant notifications, better history support and improved security. No major IRC networks have yet fully adopted the proposed standard.
After its golden era during the 1990s and early 2000s (240,000 users on QuakeNet in 2004), IRC has seen a significant decline, losing around 60% of users between 2003 and 2012, with users moving to newer social media platforms like Facebook or Twitter, but also to open platforms like XMPP which was developed in 1999. Certain networks like Freenode have not followed the overall trend and have more than quadrupled in size during the same period. However, Freenode, which in 2016 had around 90,000 users, has since declined to about 9,300 users.
The largest IRC networks have traditionally been grouped as the "Big Four"—a designation for networks that top the statistics. The Big Four networks change periodically, but due to the community nature of IRC there are a large number of other networks for users to choose from.
Historically the "Big Four" were:
EFnet
IRCnet
Undernet
DALnet
IRC reached 6 million simultaneous users in 2001 and 10 million users in 2003, dropping to 371k in 2018.
The largest IRC networks are:
Libera Chat – around 48.7k users at peak hours
OFTC – around 19.4k users at peak hours
IRCnet – around 17.9k users at peak hours
Undernet – around 13.4k users at peak hours
Rizon – around 10.5k users at peak hours
EFnet – around 10.4k users at peak hours
Freenode – around 9.3k users at peak hours
QuakeNet – around 8.4k users at peak hours
DALnet – around 7.9k users at peak hours
The top 100 IRC networks have around 228k users connected at peak hours.
Timeline
Timeline of major servers:
EFnet, 1990 to present
Undernet, 1992 to present
DALnet, 1994 to present
freenode, 1995 to present
IRCnet, 1996 to present
QuakeNet, 1997 to present
Open and Free Technology Community, 2001 to present
Rizon, 2002 to present
Libera Chat, 2021 to present
Technical information
IRC is an open protocol that uses TCP and, optionally, TLS. An IRC server can connect to other IRC servers to expand the IRC network. Users access IRC networks by connecting a client to a server. There are many client implementations, such as mIRC, HexChat and irssi, and server implementations, e.g. the original IRCd. Most IRC servers do not require users to register an account, but a nickname must be set before connecting.
IRC was originally a plain text protocol (although later extended), which on request was assigned port 194/TCP by IANA. However, the de facto standard has always been to run IRC on 6667/TCP and nearby port numbers (for example TCP ports 6660–6669, 7000) to avoid having to run the IRCd software with root privileges.
The protocol specified that characters were 8-bit but did not specify the character encoding the text was supposed to use. This can cause problems when users using different clients and/or different platforms want to converse.
All client-to-server IRC protocols in use today are descended from the protocol implemented in the irc2.4.0 version of the IRC2 server, and documented in RFC 1459. Since RFC 1459 was published, the new features in the irc2.10 implementation led to the publication of several revised protocol documents (RFC 2810, RFC 2811, RFC 2812 and RFC 2813); however, these protocol changes have not been widely adopted among other implementations.
Although many specifications on the IRC protocol have been published, there is no official specification, as the protocol remains dynamic. Virtually no clients and very few servers rely strictly on the above RFCs as a reference.
Microsoft made an extension for IRC in 1998 via the proprietary IRCX. They later stopped distributing software supporting IRCX, instead developing the proprietary MSNP.
The standard structure of a network of IRC servers is a tree. Messages are routed along only necessary branches of the tree but network state is sent to every server and there is generally a high degree of implicit trust between servers. However, this architecture has a number of problems. A misbehaving or malicious server can cause major damage to the network and any changes in structure, whether intentional or a result of conditions on the underlying network, require a net-split and net-join. This results in a lot of network traffic and spurious quit/join messages to users and temporary loss of communication to users on the splitting servers. Adding a server to a large network means a large background bandwidth load on the network and a large memory load on the server. Once established, however, each message to multiple recipients is delivered in a fashion similar to multicast, meaning each message travels a network link exactly once. This is a strength in comparison to non-multicasting protocols such as Simple Mail Transfer Protocol (SMTP) or Extensible Messaging and Presence Protocol (XMPP).
An IRC daemon can also be used on a local area network (LAN). IRC can thus be used to facilitate communication between people within the local area network (internal communication).
Commands and replies
IRC has a line-based structure. Clients send single-line messages to the server, receive replies to those messages and receive copies of some messages sent by other clients. In most clients, users can enter commands by prefixing them with a '/'. Depending on the command, these may either be handled entirely by the client, or (generally for commands the client does not recognize) passed directly to the server, possibly with some modification.
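As an illustration of this line-based exchange, the following Python sketch opens a raw connection, registers, joins a channel and answers server PINGs. The server name and channel are placeholders rather than real endpoints, and a real client would add TLS, error handling and rate limiting.

```python
import socket

# A rough sketch of a raw client session; host and channel are placeholders.
HOST, PORT = "irc.example.net", 6667      # conventional plain-text port

sock = socket.create_connection((HOST, PORT))

def send_line(line: str) -> None:
    # Every protocol message is a single line terminated by CR LF.
    sock.sendall((line + "\r\n").encode("utf-8"))

send_line("NICK demo_user")
send_line("USER demo_user 0 * :Demo User")    # registration
send_line("JOIN #example")                    # join a channel
send_line("PRIVMSG #example :hello, world")   # message the channel

buffer = b""
while True:
    data = sock.recv(4096)
    if not data:
        break
    buffer += data
    while b"\r\n" in buffer:
        raw, buffer = buffer.split(b"\r\n", 1)
        line = raw.decode("utf-8", errors="replace")
        print(line)
        # Servers periodically send PING; clients that fail to answer
        # with PONG are disconnected.
        if line.startswith("PING"):
            send_line("PONG" + line[4:])
```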
Due to the nature of the protocol, automated systems cannot always reliably pair a sent command with its reply and must resort to guessing.
Channels
The basic means of communicating to a group of users in an established IRC session is through a channel. Channels on a network can be displayed using the IRC command LIST, which lists all currently available channels on that particular network that do not have the modes +s or +p set.
Users can join a channel using the JOIN command, in most clients available as /join #channelname. Messages sent to the joined channels are then relayed to all other users.
Channels that are available across an entire IRC network are prefixed with a '#', while those local to a server use '&'. Other less common channel types include '+' channels—'modeless' channels without operators—and '!' channels, a form of timestamped channel on normally non-timestamped networks.
Modes
Users and channels may have modes that are represented by single case-sensitive letters and are set using the MODE command. User modes and channel modes are separate and can use the same letter to mean different things (e.g. user mode "i" is invisible mode while channel mode "i" is invite only.) Modes are usually set and unset using the mode command that takes a target (user or channel), a set of modes to set (+) or unset (-) and any parameters the modes need.
Some channel modes take parameters and other channel modes apply to a user on a channel or add or remove a mask (e.g. a ban mask) from a list associated with the channel rather than applying to the channel as a whole. Modes that apply to users on a channel have an associated symbol that is used to represent the mode in names replies (sent to clients on first joining a channel and on use of the names command) and in many clients is also used to represent the mode in the client's displayed list of users in a channel or to display an indicator of a user's own modes.
In order to correctly parse incoming mode messages and track channel state, the client must know which mode is of which type and, for the modes that apply to a user on a channel, which symbol goes with which letter. In early implementations of IRC this had to be hard-coded in the client, but there is now a de facto standard extension to the protocol called ISUPPORT that sends this information to the client at connect time using numeric 005.
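A minimal sketch, assuming representative ISUPPORT token values, of how a client could use the PREFIX and CHANMODES tokens from numeric 005 to classify channel modes and map user-mode letters to their display symbols:

```python
# The sample values are illustrative; real servers advertise their own sets.
def parse_isupport(tokens):
    info = {}
    for token in tokens:
        key, _, value = token.partition("=")
        info[key] = value
    # PREFIX=(ov)@+ pairs mode letters with the symbols shown in NAMES replies.
    letters, symbols = info["PREFIX"].lstrip("(").split(")")
    prefix_map = dict(zip(letters, symbols))          # {'o': '@', 'v': '+'}
    # CHANMODES lists four groups: list modes, modes that always take a
    # parameter, modes that take one only when set, and parameterless modes.
    groups = info["CHANMODES"].split(",")
    return prefix_map, dict(zip("ABCD", groups))

prefixes, chanmodes = parse_isupport(["PREFIX=(ov)@+", "CHANMODES=beI,k,l,imnpst"])
print(prefixes)    # {'o': '@', 'v': '+'}
print(chanmodes)   # {'A': 'beI', 'B': 'k', 'C': 'l', 'D': 'imnpst'}
```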
There is a small design fault in IRC regarding modes that apply to users on channels: the names message used to establish initial channel state can only send one such mode per user on the channel, but multiple such modes can be set on a single user. For example, if a user holds both operator status (+o) and voice status (+v) on a channel, a new client will be unable to see the mode with less priority (i.e. voice). Workarounds for this are possible on both the client and server side but none are widely implemented.
Standard (RFC 1459) modes
Many daemons and networks have added extra modes or modified the behavior of the standard modes defined in RFC 1459.
Channel operators
A channel operator is a client on an IRC channel that manages the channel.
IRC channel operators can be easily identified by a symbol or icon next to their name (this varies by client implementation, commonly a "@" symbol prefix, a green circle, or a Latin letter "+o"/"o").
On most networks, an operator can:
Kick a user.
Ban a user.
Give another user IRC Channel Operator Status or IRC Channel Voice Status.
Change the IRC Channel topic while channel mode +t is set.
Change the IRC Channel Mode locks.
IRC operators
There are also users who maintain elevated rights on their local server, or the entire network; these are called IRC operators, sometimes shortened to IRCops or Opers (not to be confused with channel operators). As the implementation of the IRCd varies, so do the privileges of the IRC operator on the given IRCd. RFC 1459 claims that IRC operators are "a necessary evil" to keep a clean state of the network, and as such they need to be able to disconnect and reconnect servers. Additionally, to prevent malicious users or even harmful automated programs from entering IRC, IRC operators are usually allowed to disconnect clients and completely ban IP addresses or complete subnets. Networks that carry services (NickServ et al.) usually allow their IRC operators also to handle basic "ownership" matters. Further privileged rights may include overriding channel bans (being able to join channels they would not be allowed to join if they were not opered), being able to op themselves on channels where they could not without being opered, always being auto-opped on channels, and so forth.
Hostmasks
A hostmask is a unique identifier of an IRC client connected to an IRC server. IRC servers, services, and other clients, including bots, can use it to identify a specific IRC session.
The format of a hostmask is nick!user@host. The hostmask looks similar to, but should not be confused with, an e-mail address.
The nick part is the nickname chosen by the user and may be changed while connected.
The user part is the username reported by ident on the client. If ident is not available on the client, the username specified when the client connected is used after being prefixed with a tilde.
The host part is the hostname the client is connecting from. If the IP address of the client cannot be resolved to a valid hostname by the server, the IP address is used instead of the hostname.
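A short sketch of splitting a hostmask into these three parts; the example mask is invented:

```python
# Split a hostmask of the form nick!user@host into its components.
def parse_hostmask(mask: str):
    nick, _, rest = mask.partition("!")
    user, _, host = rest.partition("@")
    return nick, user, host

nick, user, host = parse_hostmask("alice!~alice@host.example.org")
print(nick, user, host)   # alice ~alice host.example.org
# The leading "~" on the username indicates that no ident response was
# available and the client-supplied username was used instead.
```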
Because of the privacy implications of exposing the IP address or hostname of a client, some IRC daemons also provide privacy features, such as InspIRCd or UnrealIRCd's "+x" mode. This hashes a client IP address or masks part of a client's hostname, making it unreadable to users other than IRCops. Users may also have the option of requesting a "virtual host" (or "vhost"), to be displayed in the hostmask to allow further anonymity. Some IRC networks, such as Libera Chat or Freenode, use these as "cloaks" to indicate that a user is affiliated with a group or project.
URI scheme
There are three recognized uniform resource identifier (URI) schemes for Internet Relay Chat: irc, ircs, and irc6. When supported, they allow hyperlinks of various forms, including
irc://<host>[:<port>]/[<channel>[?<channel_keyword>]]
ircs://<host>[:<port>]/[<channel>[?<channel_keyword>]]
irc6://<host>[:<port>]/[<channel>[?<channel_keyword>]]
(where items enclosed within brackets ([,]) are optional) to be used to (if necessary) connect to the specified host (or network, if known to the IRC client) and join the specified channel. (This can be used within the client itself, or from another application such as a Web browser). irc is the default URI, irc6 specifies a connection to be made using IPv6, and ircs specifies a secure connection.
Per the specification, the usual hash symbol (#) will be prepended to channel names that begin with an alphanumeric character, allowing it to be omitted. Some implementations (for example, mIRC) will do so unconditionally, resulting in a (usually unintended) extra "#" (for example, ##channel) if one is included in the URL.
Some implementations allow multiple channels to be specified, separated by commas.
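A sketch of how a client might interpret such a URI. The example URI is made up, and the default port numbers shown are common conventions rather than part of the scheme definitions.

```python
from urllib.parse import urlsplit

def resolve_irc_uri(uri: str):
    parts = urlsplit(uri)
    secure = parts.scheme == "ircs"
    ipv6 = parts.scheme == "irc6"
    port = parts.port or (6697 if secure else 6667)   # common defaults, not mandated
    channel = parts.path.lstrip("/")
    keyword = parts.query or None
    # Per the scheme description, the leading '#' may be omitted for channel
    # names beginning with an alphanumeric character, so prepend it here.
    if channel and channel[0].isalnum():
        channel = "#" + channel
    return parts.hostname, port, channel, keyword, secure, ipv6

print(resolve_irc_uri("irc://irc.example.net/example"))
# ('irc.example.net', 6667, '#example', None, False, False)
```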
Challenges
Issues in the original design of IRC were the amount of shared state data being a limitation on its scalability, the absence of unique user identifications leading to the nickname collision problem, lack of protection from netsplits by means of cyclic routing, the trade-off in scalability for the sake of real-time user presence information, protocol weaknesses providing a platform for abuse, no transparent and optimizable message passing, and no encryption. Some of these issues have been addressed in Modern IRC.
Attacks
Because IRC connections may be unencrypted and typically span long time periods, they are an attractive target for DoS/DDoS attackers and hackers. Because of this, careful security policy is necessary to ensure that an IRC network is not susceptible to an attack such as a takeover war. IRC networks may also K-line or G-line users or servers that have a harming effect.
Some IRC servers support SSL/TLS connections for security purposes. This helps stop the use of packet sniffer programs to obtain the passwords of IRC users, but has little use beyond this scope due to the public nature of IRC channels. SSL connections require both client and server support (that may require the user to install SSL binaries and IRC client specific patches or modules on their computers). Some networks also use SSL for server-to-server connections, and provide a special channel flag (such as +S) to only allow SSL-connected users on the channel, while disallowing operator identification in clear text, to better utilize the advantages that SSL provides.
IRC served as an early laboratory for many kinds of Internet attacks, such as using fake ICMP unreachable messages to break TCP-based IRC connections (nuking) to annoy users or facilitate takeovers.
Abuse prevention
One of the most contentious technical issues surrounding IRC implementations, which survives to this day, is the merit of "Nick/Channel Delay" vs. "Timestamp" protocols. Both methods exist to solve the problem of denial-of-service attacks, but take very different approaches.
The problem with the original IRC protocol as implemented was that when two servers split and rejoined, the two sides of the network would simply merge their channels. If a user could join on a "split" server, where a channel that existed on the other side of the network was empty, and gain operator status, they would become a channel operator of the "combined" channel after the netsplit ended; if a user took a nickname that existed on the other side of the network, the server would kill both users when rejoining (a "nick collision"). This was often abused to "mass-kill" all users on a channel, thus creating "opless" channels where no operators were present to deal with abuse. Apart from causing problems within IRC, this encouraged people to conduct denial-of-service attacks against IRC servers in order to cause netsplits, which they would then abuse.
The nick delay (ND) and channel delay (CD) strategies aim to prevent abuse by delaying reconnections and renames. After a user signs off and the nickname becomes available, or a channel ceases to exist because all its users parted (as often happens during a netsplit), the server will not allow any user to use that nickname or join that channel, until a certain period of time (the delay) has passed. The idea behind this is that even if a netsplit occurs, it is useless to an abuser because they cannot take the nickname or gain operator status on a channel, and thus no collision of a nickname or "merging" of a channel can occur. To some extent, this inconveniences legitimate users, who might be forced to briefly use a different name after rejoining (appending an underscore is popular).
The timestamp protocol is an alternative to nick/channel delays which resolves collisions using timestamped priority. Every nickname and channel on the network is assigned a timestamp: the date and time when it was created. When a netsplit occurs, two users on each side are free to use the same nickname or channel, but when the two sides are joined, only one can survive. In the case of nicknames, the newer user, according to their TS, is killed; when a channel collides, the members (users on the channel) are merged, but the channel operators on the "losing" side of the split lose their channel operator status.
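A toy sketch of the nickname half of this rule, using invented data: when two registrations of the same nickname collide, the one with the newer timestamp is killed.

```python
# Resolve a nick collision by timestamp, as described above.
def resolve_nick_collision(a, b):
    """Each argument is (nick, timestamp, server); the newer registration
    (larger timestamp) is killed and the older one survives."""
    older, newer = sorted([a, b], key=lambda entry: entry[1])
    return {"survives": older, "killed": newer}

left = ("alice", 1_000_000, "server-A")   # registered before the split
right = ("alice", 1_000_500, "server-B")  # registered during the split
print(resolve_nick_collision(left, right))
# the user on server-B, holding the newer timestamp, is killed
```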
TS is a much more complicated protocol than ND/CD, both in design and implementation, and despite having gone through several revisions, some implementations still have problems with "desyncs" (where two servers on the same network disagree about the current state of the network) and with being too lenient about what the "losing" side is allowed to do. Under the original TS protocols, for example, there was no protection against users setting bans or other modes in the losing channel that would then be merged when the split rejoined, even though the users who had set those modes lost their channel operator status. Some modern TS-based IRC servers have also incorporated some form of ND and/or CD in addition to timestamping in an attempt to further curb abuse.
Most networks today use the timestamping approach. The timestamp versus ND/CD disagreements caused several servers to split away from EFnet and form the newer IRCnet. After the split, EFnet moved to a TS protocol, while IRCnet used ND/CD.
In recent versions of the IRCnet ircd, as well as ircds using the TS6 protocol (including Charybdis), ND has been extended/replaced by a mechanism called SAVE. This mechanism assigns every client a UID upon connecting to an IRC server. This ID starts with a number, which is forbidden in nicks (although some ircds, namely IRCnet and InspIRCd, allow clients to switch to their own UID as the nickname).
If two clients with the same nickname join from different sides of a netsplit ("nick collision"), the first server to see this collision will force both clients to change their nick to their UID, thus saving both clients from being disconnected. On IRCnet, the nickname will also be locked for some time (ND) to prevent both clients from changing back to the original nickname, thus colliding again.
Clients
Client software
Client software exists for various operating systems or software packages, as well as web-based or inside games. Many different clients are available for the various operating systems, including Windows, Unix and Linux, macOS and mobile operating systems (such as iOS and Android). On Windows, mIRC is one of the most popular clients.
Some programs which are extensible through plug-ins also serve as platforms for IRC clients. For instance, a client called ERC, written entirely in Emacs Lisp, is included in v.22.3 of Emacs. Therefore, any platform that can run Emacs can run ERC.
A number of web browsers have built-in IRC clients, such as Opera (version 12.18 and earlier) and the ChatZilla add-on for Mozilla Firefox (for Firefox 56 and earlier; included as a built-in component of SeaMonkey). Web-based clients, such as Mibbit and open source KiwiIRC, can run in most browsers.
Games such as War§ow, Unreal Tournament (up to Unreal Tournament 2004), Uplink, Spring Engine-based games, 0 A.D. and ZDaemon have included IRC.
Ustream's chat interface is IRC with custom authentication, as is Twitch's (formerly Justin.tv).
Bots
A typical use of bots in IRC is to provide IRC services or specific functionality within a channel such as to host a chat-based game or provide notifications of external events. However, some IRC bots are used to launch malicious attacks such as denial of service, spamming, or exploitation.
Bouncer
A program that runs as a daemon on a server and functions as a persistent proxy is known as a BNC or bouncer. The purpose is to maintain a connection to an IRC server, acting as a relay between the server and client, or simply to act as a proxy. Should the client lose network connectivity, the BNC may stay connected and archive all traffic for later delivery, allowing the user to resume their IRC session without disrupting their connection to the server.
Furthermore, as a way of obtaining a bouncer-like effect, an IRC client (typically text-based, for example Irssi) may be run on an always-on server to which the user connects via ssh. This also allows devices that only have ssh functionality, but no actual IRC client installed themselves, to connect to the IRC, and it allows sharing of IRC sessions.
To keep the IRC client from quitting when the ssh connection closes, the client can be run inside a terminal multiplexer such as GNU Screen or tmux, thus staying connected to the IRC network(s) constantly and able to log conversation in channels that the user is interested in, or to maintain a channel's presence on the network. Modelled after this setup, in 2004 an IRC client following the client–server model, called Smuxi, was launched.
Search engines
There are numerous search engines available to aid the user in finding what they are looking for on IRC. Generally the search engine consists of two parts, a "back-end" (or "spider/crawler") and a front-end "search engine".
The back-end (spider/webcrawler) is the workhorse of the search engine. It is responsible for crawling IRC servers to index the information being sent across them. The information that is indexed usually consists solely of channel text (text that is publicly displayed in public channels). The storage method is usually some sort of relational database, like MySQL or Oracle.
The front-end "search engine" is the user interface to the database. It supplies users with a way to search the database of indexed information to retrieve the data they are looking for. These front-end search engines can also be coded in numerous programming languages.
Most search engines have their own spider that is a single application responsible for crawling IRC and indexing data itself; however, others are "user based" indexers. The latter rely on users to install their "add-on" to their IRC client; the add-on is what sends the database the channel information of whatever channels the user happens to be on.
Many users have implemented their own ad hoc search engines using the logging features built into many IRC clients. These search engines are usually implemented as bots and dedicated to a particular channel or group of associated channels.
Character encoding
IRC still lacks a single globally accepted standard convention for how to transmit characters outside the 7-bit ASCII repertoire.
IRC servers normally transfer messages from a client to another client just as byte sequences, without any interpretation or recoding of characters. The IRC protocol (unlike e.g. MIME or HTTP) lacks mechanisms for announcing and negotiating character encoding options. This has put the responsibility for choosing the appropriate character codec on the client. In practice, IRC channels have largely used the same character encodings that were also used by operating systems (in particular Unix derivatives) in the respective language communities:
7-bit era: In the early days of IRC, especially among Scandinavian and Finnish language users, national variants of ISO 646 were the dominant character encodings. These encode non-ASCII characters like Ä Ö Å ä ö å at code positions 0x5B 0x5C 0x5D 0x7B 0x7C 0x7D (US-ASCII: [ \ ] { | }). That is why these codes are always allowed in nicknames. According to RFC 1459, { | } in nicknames should be treated as lowercase equivalents of [ \ ] respectively. By the late 1990s, the use of 7-bit encodings had disappeared in favour of ISO 8859-1, and such equivalence mappings were dropped from some IRC daemons.
8-bit era: Since the early 1990s, 8-bit encodings such as ISO 8859-1 have become commonly used for European languages. Russian users had a choice of KOI8-R, ISO 8859-5 and CP1251, and since about 2000, modern Russian IRC networks convert between these different commonly used encodings of the Cyrillic script.
Multi-byte era: For a long time, East Asian IRC channels with logographic scripts in China, Japan, and Korea have been using multi-byte encodings such as EUC or ISO-2022-JP. With the common migration from ISO 8859 to UTF-8 on Linux and Unix platforms since about 2002, UTF-8 has become an increasingly popular substitute for many of the previously used 8-bit encodings in European channels. Some IRC clients are now capable of reading messages both in ISO 8859-1 or UTF-8 in the same channel, heuristically autodetecting which encoding is used. The shift to UTF-8 began in particular on Finnish-speaking IRC (Merkistö (Finnish)).
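A minimal sketch of the heuristic some clients apply, assuming ISO 8859-1 as the legacy fallback: attempt UTF-8 first and fall back only if decoding fails.

```python
# Heuristic per-message decoding as described in the list above.
def decode_irc_line(raw: bytes, fallback: str = "iso-8859-1") -> str:
    try:
        return raw.decode("utf-8")
    except UnicodeDecodeError:
        # Any byte sequence decodes under ISO 8859-1, so this never fails,
        # though it may display mojibake if the real encoding was something else.
        return raw.decode(fallback)

print(decode_irc_line("päivää".encode("utf-8")))        # UTF-8 input
print(decode_irc_line("päivää".encode("iso-8859-1")))   # legacy 8-bit input
```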
Today, the UTF-8 encoding of Unicode/ISO 10646 would be the most likely contender for a single future standard character encoding for all IRC communication, if such a standard ever relaxed the 510-byte message size restriction. UTF-8 is ASCII compatible and covers the superset of all other commonly used coded character set standards.
File sharing
Much like conventional P2P file sharing, users can create file servers that allow them to share files with each other by using customised IRC bots or scripts for their IRC client. Often users will group together to distribute warez via a network of IRC bots.
Technically, IRC provides no file transfer mechanisms itself; file sharing is implemented by IRC clients, typically using the Direct Client-to-Client (DCC) protocol, in which file transfers are negotiated through the exchange of private messages between clients. The vast majority of IRC clients feature support for DCC file transfers, hence the view that file sharing is an integral feature of IRC. The commonplace usage of this protocol, however, sometimes also causes DCC spam. DCC commands have also been used to exploit vulnerable clients into performing an action such as disconnecting from the server or exiting the client.
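As a rough sketch of how such a negotiation looks on the wire, the following parses a typical DCC SEND offer, which is carried as a CTCP message (delimited by 0x01 bytes) inside a private message. The offer shown is invented, and the exact field layout can vary between clients (for example, filenames containing spaces are often quoted).

```python
import ipaddress

# In the common form, the address is sent as the IPv4 address expressed
# as a decimal integer, followed by the port and the file size in bytes.
def parse_dcc_send(ctcp: str):
    body = ctcp.strip("\x01")
    _dcc, _send, filename, ip_int, port, size = body.split(" ", 5)
    return {
        "filename": filename,
        "address": str(ipaddress.IPv4Address(int(ip_int))),
        "port": int(port),
        "size": int(size),
    }

offer = "\x01DCC SEND notes.txt 3232235521 5000 2048\x01"
print(parse_dcc_send(offer))
# {'filename': 'notes.txt', 'address': '192.168.0.1', 'port': 5000, 'size': 2048}
```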
See also
Chat room
Client-to-client protocol
Comparison of instant messaging protocols
Comparison of IRC clients
Comparison of mobile IRC clients
The Hamnet Players
Internet slang
List of IRC commands
Serving channel
Matrix (protocol) and XMPP, similar chat protocols
Citations
General bibliography
Further reading
External links
IRC Numerics List
History of IRC
IRC.org – Technical and Historical IRC6 information; Articles on the history of IRC
IRChelp.org – Internet Relay Chat (IRC) help archive; Large archive of IRC-related documents
IRCv3 – Working group of developers, who add new features to the protocol and write specs for them
IRC-Source – Internet Relay Chat (IRC) network and channel search engine with historical data
irc.netsplit.de – Internet Relay Chat (IRC) network listing with historical data
1988 software
Application layer protocols
Computer-related introductions in 1988
Finnish inventions
Internet terminology
Virtual communities
14739 | https://en.wikipedia.org/wiki/IEEE%20802.11 | IEEE 802.11 | IEEE 802.11 is part of the IEEE 802 set of local area network (LAN) technical standards, and specifies the set of media access control (MAC) and physical layer (PHY) protocols for implementing wireless local area network (WLAN) computer communication. The standard and amendments provide the basis for wireless network products using the Wi-Fi brand and are the world's most widely used wireless computer networking standards. IEEE 802.11 is used in most home and office networks to allow laptops, printers, smartphones, and other devices to communicate with each other and access the Internet without connecting wires.
The standards are created and maintained by the Institute of Electrical and Electronics Engineers (IEEE) LAN/MAN Standards Committee (IEEE 802). The base version of the standard was released in 1997 and has had subsequent amendments. While each amendment is officially revoked when it is incorporated in the latest version of the standard, the corporate world tends to market to the revisions because they concisely denote the capabilities of their products. As a result, in the marketplace, each revision tends to become its own standard.
IEEE 802.11 uses various frequencies including, but not limited to, 2.4 GHz, 5 GHz, 6 GHz, and 60 GHz frequency bands. Although IEEE 802.11 specifications list channels that might be used, the radio frequency spectrum availability allowed varies significantly by regulatory domain.
The protocols are typically used in conjunction with IEEE 802.2, are designed to interwork seamlessly with Ethernet, and are very often used to carry Internet Protocol traffic.
General description
The 802.11 family consists of a series of half-duplex over-the-air modulation techniques that use the same basic protocol. The 802.11 protocol family employs carrier-sense multiple access with collision avoidance whereby equipment listens to a channel for other users (including non 802.11 users) before transmitting each frame (some use the term "packet", which may be ambiguous: "frame" is more technically correct).
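A toy sketch of this listen-before-talk behaviour with binary exponential backoff; the contention-window values and the busy-channel model are illustrative only and omit details such as interframe spacing and the per-slot countdown that real hardware performs.

```python
import random

# Sense the channel; if busy, defer and widen the random backoff window.
def try_transmit(channel_busy, max_attempts=7, cw_min=15, cw_max=1023):
    contention_window = cw_min
    for attempt in range(max_attempts):
        if not channel_busy():                      # carrier sense
            backoff = random.randint(0, contention_window)
            # In real hardware the backoff counter only counts down while
            # the medium stays idle; that detail is omitted here.
            return f"transmit after {backoff} idle slots (attempt {attempt + 1})"
        # Medium busy: double the contention window (binary exponential backoff).
        contention_window = min(2 * contention_window + 1, cw_max)
    return "give up: channel stayed busy"

print(try_transmit(lambda: random.random() < 0.6))
```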
802.11-1997 was the first wireless networking standard in the family, but 802.11b was the first widely accepted one, followed by 802.11a, 802.11g, 802.11n, and 802.11ac. Other standards in the family (c–f, h, j) are service amendments that are used to extend the current scope of the existing standard, which amendments may also include corrections to a previous specification.
802.11b and 802.11g use the 2.4-GHz ISM band, operating in the United States under Part 15 of the U.S. Federal Communications Commission Rules and Regulations. 802.11n can also use that 2.4-GHz band. Because of this choice of frequency band, 802.11b/g/n equipment may occasionally suffer interference in the 2.4-GHz band from microwave ovens, cordless telephones, and Bluetooth devices. 802.11b and 802.11g control their interference and susceptibility to interference by using direct-sequence spread spectrum (DSSS) and orthogonal frequency-division multiplexing (OFDM) signaling methods, respectively.
802.11a uses the 5 GHz U-NII band which, for much of the world, offers at least 23 non-overlapping, 20-MHz-wide channels. This is an advantage over the 2.4-GHz, ISM-frequency band, which offers only three non-overlapping, 20-MHz-wide channels where other adjacent channels overlap (see: list of WLAN channels). Better or worse performance with higher or lower frequencies (channels) may be realized, depending on the environment. 802.11n can use either the 2.4 GHz or 5 GHz band; 802.11ac uses only the 5 GHz band.
The segment of the radio frequency spectrum used by 802.11 varies between countries. In the US, 802.11a and 802.11g devices may be operated without a license, as allowed in Part 15 of the FCC Rules and Regulations. Frequencies used by channels one through six of 802.11b and 802.11g fall within the 2.4 GHz amateur radio band. Licensed amateur radio operators may operate 802.11b/g devices under Part 97 of the FCC Rules and Regulations, allowing increased power output but not commercial content or encryption.
Generations
In 2018, the Wi-Fi Alliance began using a consumer-friendly generation numbering scheme for the publicly used 802.11 protocols. Wi-Fi generations 1–6 refer to the 802.11b, 802.11a, 802.11g, 802.11n, 802.11ac, and 802.11ax protocols, in that order.
History
802.11 technology has its origins in a 1985 ruling by the U.S. Federal Communications Commission that released the ISM band for unlicensed use.
In 1991 NCR Corporation/AT&T (now Nokia Labs and LSI Corporation) invented a precursor to 802.11 in Nieuwegein, the Netherlands. The inventors initially intended to use the technology for cashier systems. The first wireless products were brought to the market under the name WaveLAN with raw data rates of 1 Mbit/s and 2 Mbit/s.
Vic Hayes, who held the chair of IEEE 802.11 for 10 years, and has been called the "father of Wi-Fi", was involved in designing the initial 802.11b and 802.11a standards within the IEEE. He, along with Bell Labs Engineer Bruce Tuch, approached IEEE to create a standard.
In 1999, the Wi-Fi Alliance was formed as a trade association to hold the Wi-Fi trademark under which most products are sold.
The major commercial breakthrough came with Apple's adopting Wi-Fi for their iBook series of laptops in 1999. It was the first mass consumer product to offer Wi-Fi network connectivity, which was then branded by Apple as AirPort. One year later IBM followed with its ThinkPad 1300 series in 2000.
Protocol
802.11-1997 (802.11 legacy)
The original version of the standard IEEE 802.11 was released in 1997 and clarified in 1999, but is now obsolete. It specified two net bit rates of 1 or 2 megabits per second (Mbit/s), plus forward error correction code. It specified three alternative physical layer technologies: diffuse infrared operating at 1 Mbit/s; frequency-hopping spread spectrum operating at 1 Mbit/s or 2 Mbit/s; and direct-sequence spread spectrum operating at 1 Mbit/s or 2 Mbit/s. The latter two radio technologies used microwave transmission over the Industrial Scientific Medical frequency band at 2.4 GHz. Some earlier WLAN technologies used lower frequencies, such as the U.S. 900 MHz ISM band.
Legacy 802.11 with direct-sequence spread spectrum was rapidly supplanted and popularized by 802.11b.
802.11a (OFDM waveform)
802.11a, published in 1999, uses the same data link layer protocol and frame format as the original standard, but an OFDM based air interface (physical layer) was added. It was later relabeled Wi-Fi 2, by the Wi-Fi Alliance, relative to Wi-Fi 1 (802.11b).
It operates in the 5 GHz band with a maximum net data rate of 54 Mbit/s, plus error correction code, which yields realistic net achievable throughput in the mid-20 Mbit/s range. It has seen widespread worldwide implementation, particularly within the corporate workspace.
Since the 2.4 GHz band is heavily used to the point of being crowded, using the relatively unused 5 GHz band gives 802.11a a significant advantage. However, this high carrier frequency also brings a disadvantage: the effective overall range of 802.11a is less than that of 802.11b/g. In theory, 802.11a signals are absorbed more readily by walls and other solid objects in their path due to their smaller wavelength, and, as a result, cannot penetrate as far as those of 802.11b. In practice, 802.11b typically has a higher range at low speeds (802.11b will reduce speed to 5.5 Mbit/s or even 1 Mbit/s at low signal strengths). 802.11a also suffers from interference, but locally there may be fewer signals to interfere with, resulting in less interference and better throughput.
802.11b
The 802.11b standard has a maximum raw data rate of 11 Mbit/s (Megabits per second) and uses the same media access method defined in the original standard. 802.11b products appeared on the market in early 2000, since 802.11b is a direct extension of the modulation technique defined in the original standard. The dramatic increase in throughput of 802.11b (compared to the original standard) along with simultaneous substantial price reductions led to the rapid acceptance of 802.11b as the definitive wireless LAN technology.
Devices using 802.11b experience interference from other products operating in the 2.4 GHz band. Devices operating in the 2.4 GHz range include microwave ovens, Bluetooth devices, baby monitors, cordless telephones, and some amateur radio equipment. As unlicensed intentional radiators in this ISM band, they must not interfere with and must tolerate interference from primary or secondary allocations (users) of this band, such as amateur radio.
802.11g
In June 2003, a third modulation standard was ratified: 802.11g. This works in the 2.4 GHz band (like 802.11b), but uses the same OFDM based transmission scheme as 802.11a. It operates at a maximum physical layer bit rate of 54 Mbit/s exclusive of forward error correction codes, or about 22 Mbit/s average throughput. 802.11g hardware is fully backward compatible with 802.11b hardware, and therefore is encumbered with legacy issues that reduce throughput by ~21% when compared to 802.11a.
The then-proposed 802.11g standard was rapidly adopted in the market starting in January 2003, well before ratification, due to the desire for higher data rates as well as reductions in manufacturing costs. By summer 2003, most dual-band 802.11a/b products became dual-band/tri-mode, supporting a and b/g in a single mobile adapter card or access point. Details of making b and g work well together occupied much of the lingering technical process; in an 802.11g network, however, the activity of an 802.11b participant will reduce the data rate of the overall 802.11g network.
Like 802.11b, 802.11g devices also suffer interference from other products operating in the 2.4 GHz band, for example, wireless keyboards.
802.11-2007
In 2003, task group TGma was authorized to "roll up" many of the amendments to the 1999 version of the 802.11 standard. REVma or 802.11ma, as it was called, created a single document that merged 8 amendments (802.11a, b, d, e, g, h, i, j) with the base standard. Upon approval on 8 March 2007, 802.11REVma was renamed to the then-current base standard IEEE 802.11-2007.
802.11n
802.11n is an amendment that improves upon the previous 802.11 standards; its first draft of certification was published in 2006. The 802.11n standard was retroactively labelled as Wi-Fi 4 by the Wi-Fi Alliance. The standard added support for multiple-input multiple-output antennas (MIMO). 802.11n operates on both the 2.4 GHz and the 5 GHz bands. Support for 5 GHz bands is optional. Its net data rate ranges from 54 Mbit/s to 600 Mbit/s. The IEEE has approved the amendment, and it was published in October 2009. Prior to the final ratification, enterprises were already migrating to 802.11n networks based on the Wi-Fi Alliance's certification of products conforming to a 2007 draft of the 802.11n proposal.
802.11-2012
In May 2007, task group TGmb was authorized to "roll up" many of the amendments to the 2007 version of the 802.11 standard. REVmb or 802.11mb, as it was called, created a single document that merged ten amendments (802.11k, r, y, n, w, p, z, v, u, s) with the 2007 base standard. In addition much cleanup was done, including a reordering of many of the clauses. Upon publication on 29 March 2012, the new standard was referred to as IEEE 802.11-2012.
802.11ac
IEEE 802.11ac-2013 is an amendment to IEEE 802.11, published in December 2013, that builds on 802.11n. The 802.11ac standard was retroactively labelled as Wi-Fi 5 by the Wi-Fi Alliance. Changes compared to 802.11n include wider channels (80 or 160 MHz versus 40 MHz) in the 5 GHz band, more spatial streams (up to eight versus four), higher-order modulation (up to 256-QAM vs. 64-QAM), and the addition of Multi-user MIMO (MU-MIMO). The Wi-Fi Alliance separated the introduction of ac wireless products into two phases ("waves"), named "Wave 1" and "Wave 2". From mid-2013, the alliance started certifying Wave 1 802.11ac products shipped by manufacturers, based on the IEEE 802.11ac Draft 3.0 (the IEEE standard was not finalized until later that year). In 2016 Wi-Fi Alliance introduced the Wave 2 certification, to provide higher bandwidth and capacity than Wave 1 products. Wave 2 products include additional features like MU-MIMO, 160 MHz channel width support, support for more 5 GHz channels, and four spatial streams (with four antennas; compared to three in Wave 1 and 802.11n, and eight in IEEE's 802.11ax specification).
802.11ad
IEEE 802.11ad is an amendment that defines a new physical layer for 802.11 networks to operate in the 60 GHz millimeter wave spectrum. This frequency band has significantly different propagation characteristics than the 2.4 GHz and 5 GHz bands where Wi-Fi networks operate. Products implementing the 802.11ad standard are being brought to market under the WiGig brand name. The certification program is now being developed by the Wi-Fi Alliance instead of the now defunct Wireless Gigabit Alliance. The peak transmission rate of 802.11ad is 7 Gbit/s.
IEEE 802.11ad is a protocol used for very high data rates (about 8 Gbit/s) and for short range communication (about 1–10 meters).
TP-Link announced the world's first 802.11ad router in January 2016.
The WiGig standard is not too well known, although it was announced in 2009 and added to the IEEE 802.11 family in December 2012.
802.11af
IEEE 802.11af, also referred to as "White-Fi" and "Super Wi-Fi", is an amendment, approved in February 2014, that allows WLAN operation in TV white space spectrum in the VHF and UHF bands between 54 and 790 MHz. It uses cognitive radio technology to transmit on unused TV channels, with the standard taking measures to limit interference for primary users, such as analog TV, digital TV, and wireless microphones. Access points and stations determine their position using a satellite positioning system such as GPS, and use the Internet to query a geolocation database (GDB) provided by a regional regulatory agency to discover what frequency channels are available for use at a given time and position. The physical layer uses OFDM and is based on 802.11ac. The propagation path loss as well as the attenuation by materials such as brick and concrete is lower in the UHF and VHF bands than in the 2.4 GHz and 5 GHz bands, which increases the possible range. The frequency channels are 6 to 8 MHz wide, depending on the regulatory domain. Up to four channels may be bonded in either one or two contiguous blocks. MIMO operation is possible with up to four streams used for either space–time block code (STBC) or multi-user (MU) operation. The achievable data rate per spatial stream is 26.7 Mbit/s for 6 and 7 MHz channels, and 35.6 Mbit/s for 8 MHz channels. With four spatial streams and four bonded channels, the maximum data rate is 426.7 Mbit/s for 6 and 7 MHz channels and 568.9 Mbit/s for 8 MHz channels.
802.11-2016
IEEE 802.11-2016, which was known as IEEE 802.11 REVmc, is a revision based on IEEE 802.11-2012, incorporating 5 amendments (11ae, 11aa, 11ad, 11ac, 11af). In addition, existing MAC and PHY functions have been enhanced and obsolete features were removed or marked for removal. Some clauses and annexes have been renumbered.
802.11ah
IEEE 802.11ah, published in 2017, defines a WLAN system operating at sub-1 GHz license-exempt bands. Due to the favorable propagation characteristics of the low frequency spectra, 802.11ah can provide improved transmission range compared with the conventional 802.11 WLANs operating in the 2.4 GHz and 5 GHz bands. 802.11ah can be used for various purposes including large scale sensor networks, extended range hotspots, and outdoor Wi-Fi for cellular traffic offloading, whereas the available bandwidth is relatively narrow. The protocol intends power consumption to be competitive with low-power Bluetooth, at a much wider range.
802.11ai
IEEE 802.11ai is an amendment to the 802.11 standard that added new mechanisms for a faster initial link setup time.
802.11aj
IEEE 802.11aj is a derivative of 802.11ad for use in the 45 GHz unlicensed spectrum available in some regions of the world (specifically China); it also provides additional capabilities for use in the 60 GHz band.
Alternatively known as China Millimeter Wave (CMMW).
802.11aq
IEEE 802.11aq is an amendment to the 802.11 standard that will enable pre-association discovery of services. This extends some of the mechanisms in 802.11u that enabled device discovery to discover further the services running on a device, or provided by a network.
802.11-2020
IEEE 802.11-2020, which was known as IEEE 802.11 REVmd, is a revision based on IEEE 802.11-2016 incorporating 5 amendments (11ai, 11ah, 11aj, 11ak, 11aq). In addition, existing MAC and PHY functions have been enhanced and obsolete features were removed or marked for removal. Some clauses and annexes have been added.
802.11ax
IEEE 802.11ax is the successor to 802.11ac, marketed as Wi-Fi 6 (2.4 GHz and 5 GHz) and Wi-Fi 6E (6 GHz) by the Wi-Fi Alliance. It is also known as High Efficiency Wi-Fi, for the overall improvements to clients in dense environments. For an individual client, the maximum improvement in data rate (PHY speed) over the predecessor (802.11ac) is only 39% (for comparison, this improvement was nearly 500% for the predecessors). Yet, even with this comparatively minor 39% figure, the goal was to provide 4 times the throughput-per-area of 802.11ac (hence High Efficiency). The motivation behind this goal was the deployment of WLAN in dense environments such as corporate offices, shopping malls and dense residential apartments. This is achieved by means of a technique called OFDMA, which is basically multiplexing in the frequency domain (as opposed to spatial multiplexing, as in 802.11ac). This is equivalent to cellular technology applied to Wi-Fi.
The IEEE 802.11ax-2021 standard was approved on February 9, 2021.
802.11ay
IEEE 802.11ay is a standard that is being developed, also called EDMG: Enhanced Directional MultiGigabit PHY. It is an amendment that defines a new physical layer for 802.11 networks to operate in the 60 GHz millimeter wave spectrum. It will be an extension of the existing 11ad, aimed to extend the throughput, range, and use-cases. The main use-cases include indoor operation and short-range communications due to atmospheric oxygen absorption and inability to penetrate walls. The peak transmission rate of 802.11ay is 40 Gbit/s. The main extensions include: channel bonding (2, 3 and 4), MIMO (up to 4 streams) and higher modulation schemes. The expected range is 300-500 m.
802.11ba
IEEE 802.11ba Wake-up Radio (WUR) Operation is an amendment to the IEEE 802.11 standard that enables energy efficient operation for data reception without increasing latency. The target active power consumption to receive a WUR packet is less than 1 milliwatt and supports data rates of 62.5 kbit/s and 250 kbit/s. The WUR PHY uses MC-OOK (multicarrier OOK) to achieve extremely low power consumption.
802.11be
IEEE 802.11be Extremely High Throughput (EHT) is the potential next amendment to the 802.11 IEEE standard, and will likely be designated as Wi-Fi 7. It will build upon 802.11ax, focusing on WLAN indoor and outdoor operation with stationary and pedestrian speeds in the 2.4 GHz, 5 GHz, and 6 GHz frequency bands.
Common misunderstandings about achievable throughput
Across all variations of 802.11, maximum achievable throughputs are given either based on measurements under ideal conditions or based on the layer-2 data rates. However, this does not apply to typical deployments in which data is being transferred between two endpoints, of which at least one is typically connected to a wired infrastructure and the other endpoint is connected to an infrastructure via a wireless link.
This means that, typically, data frames pass an 802.11 (WLAN) medium and are being converted to 802.3 (Ethernet) or vice versa. Due to the difference in the frame (header) lengths of these two media, the application's packet size determines the speed of the data transfer. This means applications that use small packets (e.g., VoIP) create dataflows with high-overhead traffic (i.e., a low goodput). Other factors that contribute to the overall application data rate are the speed with which the application transmits the packets (i.e., the data rate) and, of course, the energy with which the wireless signal is received. The latter is determined by distance and by the configured output power of the communicating devices.
The same references apply to the attached graphs that show measurements of UDP throughput. Each represents an average (UDP) throughput (please note that the error bars are there but barely visible due to the small variation) of 25 measurements. Each is with a specific packet size (small or large) and with a specific data rate (10 kbit/s – 100 Mbit/s). Markers for traffic profiles of common applications are included as well. These figures assume there are no packet errors, which, if occurring, will lower the transmission rate further.
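A back-of-the-envelope sketch of this overhead effect, using an assumed fixed per-frame overhead figure that stands in for MAC/PHY headers, interframe spacing and acknowledgements; the numbers are illustrative, not measured values.

```python
# Fraction of airtime carrying user data for a given payload size.
def efficiency(payload_bytes: int, overhead_bytes: int = 90) -> float:
    return payload_bytes / (payload_bytes + overhead_bytes)

for payload in (100, 500, 1500):      # e.g. VoIP-sized vs. full-sized frames
    print(f"{payload:>5} B payload -> {efficiency(payload):.0%} of airtime is user data")
# Small (VoIP-like) packets spend much of the airtime on overhead, while
# large packets approach the nominal link rate far more closely.
```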
Channels and frequencies
802.11b, 802.11g, and 802.11n-2.4 utilize the 2.4 GHz spectrum, one of the ISM bands. 802.11a, 802.11n, and 802.11ac use the more heavily regulated 5 GHz band. These are commonly referred to as the "2.4 GHz and 5 GHz bands" in most sales literature. Each spectrum is sub-divided into channels with a center frequency and bandwidth, analogous to how radio and TV broadcast bands are sub-divided.
The 2.4 GHz band is divided into 14 channels spaced 5 MHz apart, beginning with channel 1, which is centered on 2.412 GHz. The latter channels have additional restrictions or are unavailable for use in some regulatory domains.
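A small sketch of this channel-to-centre-frequency mapping; channel 14, available in Japan for 802.11b only, is the special case sitting 12 MHz above channel 13.

```python
# Centre frequency in MHz for a 2.4 GHz band channel number.
def channel_center_mhz(channel: int) -> int:
    if channel == 14:
        return 2484                  # Japan, 802.11b only
    if 1 <= channel <= 13:
        return 2407 + 5 * channel    # channel 1 = 2412 MHz, 5 MHz spacing
    raise ValueError("not a 2.4 GHz channel")

print([channel_center_mhz(c) for c in (1, 6, 11, 14)])
# [2412, 2437, 2462, 2484]
```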
The channel numbering of the spectrum is less intuitive due to the differences in regulations between countries. These are discussed in greater detail on the list of WLAN channels.
Channel spacing within the 2.4 GHz band
In addition to specifying the channel center frequency, 802.11 also specifies (in Clause 17) a spectral mask defining the permitted power distribution across each channel. The mask requires the signal to be attenuated a minimum of 20 dB from its peak amplitude at ±11 MHz from the center frequency, the point at which a channel is effectively 22 MHz wide. One consequence is that stations can use only every fourth or fifth channel without overlap.
Availability of channels is regulated by country, constrained in part by how each country allocates radio spectrum to various services. At one extreme, Japan permits the use of all 14 channels for 802.11b, and channels 1 through 13 for 802.11g/n-2.4. Other countries such as Spain initially allowed only channels 10 and 11, and France allowed only 10, 11, 12, and 13; however, Europe now allows channels 1 through 13. North America and some Central and South American countries allow only channels 1 through 11.
Since the spectral mask defines only power output restrictions up to ±11 MHz from the center frequency to be attenuated by −50 dBr, it is often assumed that the energy of the channel extends no further than these limits. It is more correct to say that the overlapping signal on any channel should be sufficiently attenuated to interfere only minimally with a transmitter on any other channel, given the separation between channels. Due to the near–far problem a transmitter can impact (desensitize) a receiver on a "non-overlapping" channel, but only if it is close to the victim receiver (within a meter) or operating above allowed power levels. Conversely, a sufficiently distant transmitter on an overlapping channel can have little to no significant effect.
Confusion often arises over the amount of channel separation required between transmitting devices. 802.11b was based on direct-sequence spread spectrum (DSSS) modulation and utilized a channel bandwidth of 22 MHz, resulting in three "non-overlapping" channels (1, 6, and 11). 802.11g was based on OFDM modulation and utilized a channel bandwidth of 20 MHz. This occasionally leads to the belief that four "non-overlapping" channels (1, 5, 9, and 13) exist under 802.11g. However, this is not the case as per 17.4.6.3 Channel Numbering of operating channels of the IEEE Std 802.11 (2012), which states, "In a multiple cell network topology, overlapping and/or adjacent cells using different channels can operate simultaneously without interference if the distance between the center frequencies is at least 25 MHz." (see also section 18.3.9.3 and Figure 18-13).
This does not mean, however, that the technical overlap of the channels makes overlapping channel plans unusable. The amount of inter-channel interference seen on a configuration using channels 1, 5, 9, and 13 (which is permitted in Europe, but not in North America) is barely different from that of a three-channel configuration, but with an entire extra channel available.
However, overlap between channels with more narrow spacing (e.g. 1, 4, 7, 11 in North America) may cause unacceptable degradation of signal quality and throughput, particularly when users transmit near the boundaries of AP cells.
Regulatory domains and legal compliance
IEEE uses the phrase regdomain to refer to a legal regulatory region. Different countries define different levels of allowable transmitter power, time that a channel can be occupied, and different available channels. Domain codes are specified for the United States, Canada, ETSI (Europe), Spain, France, Japan, and China.
Most Wi-Fi certified devices default to regdomain 0, which means least common denominator settings, i.e., the device will not transmit at a power above the allowable power in any nation, nor will it use frequencies that are not permitted in any nation.
The regdomain setting is often made difficult or impossible to change so that the end-users do not conflict with local regulatory agencies such as the United States' Federal Communications Commission.
Layer 2 – Datagrams
The datagrams are called frames. Current 802.11 standards specify frame types for use in the transmission of data as well as management and control of wireless links.
Frames are divided into very specific and standardized sections. Each frame consists of a MAC header, payload, and frame check sequence (FCS). Some frames may not have a payload.
The first two bytes of the MAC header form a frame control field specifying the form and function of the frame. This frame control field is subdivided into the following sub-fields (a short parsing sketch follows the list):
Protocol Version: Two bits representing the protocol version. The currently used protocol version is zero. Other values are reserved for future use.
Type: Two bits identifying the type of WLAN frame. Control, Data, and Management are various frame types defined in IEEE 802.11.
Subtype: Four bits providing additional discrimination between frames. Type and Subtype are used together to identify the exact frame.
ToDS and FromDS: Each is one bit in size. They indicate whether a data frame is headed for a distribution system. Control and management frames set these values to zero. All the data frames will have one of these bits set. However, communication within an independent basic service set (IBSS) network always sets these bits to zero.
More Fragments: The More Fragments bit is set when a packet is divided into multiple frames for transmission. Every frame except the last frame of a packet will have this bit set.
Retry: Sometimes frames require retransmission, and for this, there is a Retry bit that is set to one when a frame is resent. This aids in the elimination of duplicate frames.
Power Management: This bit indicates the power management state of the sender after the completion of a frame exchange. Access points are required to manage the connection and will never set the power-saver bit.
More Data: The More Data bit is used to buffer frames received in a distributed system. The access point uses this bit to facilitate stations in power-saver mode. It indicates that at least one frame is available and addresses all stations connected.
Protected Frame: The Protected Frame bit is set to the value of one if the frame body is encrypted by a protection mechanism such as Wired Equivalent Privacy (WEP), Wi-Fi Protected Access (WPA), or Wi-Fi Protected Access II (WPA2).
Order: This bit is set only when the "strict ordering" delivery method is employed. Frames and fragments are not always sent in order as it causes a transmission performance penalty.
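A sketch of unpacking these sub-fields from the two frame-control bytes, with the bit positions as commonly documented for the 802.11 MAC header; the example bytes correspond to a typical beacon frame.

```python
# Decode the two-byte frame control field into its sub-fields.
def parse_frame_control(b0: int, b1: int) -> dict:
    return {
        "protocol_version": b0 & 0b11,
        "type":             (b0 >> 2) & 0b11,     # 0=management, 1=control, 2=data
        "subtype":          (b0 >> 4) & 0b1111,
        "to_ds":            bool(b1 & 0x01),
        "from_ds":          bool(b1 & 0x02),
        "more_fragments":   bool(b1 & 0x04),
        "retry":            bool(b1 & 0x08),
        "power_management": bool(b1 & 0x10),
        "more_data":        bool(b1 & 0x20),
        "protected":        bool(b1 & 0x40),
        "order":            bool(b1 & 0x80),
    }

# 0x80 0x00 is the frame control of a beacon (type 0 = management, subtype 8).
print(parse_frame_control(0x80, 0x00))
```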
The next two bytes are reserved for the Duration ID field, indicating how long the field's transmission will take so other devices know when the channel will be available again. This field can take one of three forms: Duration, Contention-Free Period (CFP), and Association ID (AID).
An 802.11 frame can have up to four address fields. Each field can carry a MAC address. Address 1 is the receiver, Address 2 is the transmitter, Address 3 is used for filtering purposes by the receiver. Address 4 is only present in data frames transmitted between access points in an Extended Service Set or between intermediate nodes in a mesh network.
The remaining fields of the header are:
The Sequence Control field is a two-byte section used to identify message order and eliminate duplicate frames. The first 4 bits are used for the fragmentation number, and the last 12 bits are the sequence number.
An optional two-byte Quality of Service control field, present in QoS Data frames; it was added with 802.11e.
The payload or frame body field is variable in size, from 0 to 2304 bytes plus any overhead from security encapsulation, and contains information from higher layers.
The Frame Check Sequence (FCS) is the last four bytes in the standard 802.11 frame. Often referred to as the Cyclic Redundancy Check (CRC), it allows for integrity checks of retrieved frames. As frames are about to be sent, the FCS is calculated and appended. When a station receives a frame, it can calculate the FCS of the frame and compare it to the one received. If they match, it is assumed that the frame was not distorted during transmission.
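A sketch combining two of these header details: splitting the Sequence Control field into its fragment and sequence numbers, and checking the FCS with CRC-32. The byte order assumed here is the little-endian layout commonly seen in packet captures, and the frame bytes are a made-up stub rather than a real frame.

```python
import struct
import zlib

# Low 4 bits: fragment number; remaining 12 bits: sequence number.
def split_sequence_control(value: int):
    fragment_number = value & 0x000F
    sequence_number = (value >> 4) & 0x0FFF
    return fragment_number, sequence_number

# Compare the CRC-32 of the frame body with the trailing 4-byte FCS.
def fcs_ok(frame: bytes) -> bool:
    body, received_fcs = frame[:-4], struct.unpack("<I", frame[-4:])[0]
    return zlib.crc32(body) & 0xFFFFFFFF == received_fcs

print(split_sequence_control(0x1234))   # fragment 4, sequence 0x123
demo = b"\x08\x02" + b"\x00" * 22       # made-up header stub, not a real frame
print(fcs_ok(demo + struct.pack("<I", zlib.crc32(demo) & 0xFFFFFFFF)))  # True
```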
Management frames
Management frames are not always authenticated, and allow for the maintenance, or discontinuance, of communication. Some common 802.11 subtypes include:
Authentication frame: 802.11 authentication begins with the wireless network interface card (WNIC) sending an authentication frame to the access point containing its identity.
When open system authentication is being used, the WNIC sends only a single authentication frame, and the access point responds with an authentication frame of its own indicating acceptance or rejection.
When shared key authentication is being used, the WNIC sends an initial authentication request, and the access point responds with an authentication frame containing challenge text. The WNIC then sends an authentication frame containing the encrypted version of the challenge text to the access point. The access point ensures the text was encrypted with the correct key by decrypting it with its own key. The result of this process determines the WNIC's authentication status.
Association request frame: Sent from a station, it enables the access point to allocate resources and synchronize. The frame carries information about the WNIC, including supported data rates and the SSID of the network the station wishes to associate with. If the request is accepted, the access point reserves memory and establishes an association ID for the WNIC.
Association response frame: Sent from an access point to a station containing the acceptance or rejection to an association request. If it is an acceptance, the frame will contain information such as an association ID and supported data rates.
Beacon frame: Sent periodically from an access point to announce its presence and provide the SSID, and other parameters for WNICs within range.
Deauthentication frame: Sent from a station wishing to terminate the connection with another station.
Disassociation frame: Sent from a station wishing to terminate the connection. It is an elegant way to allow the access point to relinquish memory allocation and remove the WNIC from the association table.
Probe request frame: Sent from a station when it requires information from another station.
Probe response frame: Sent from an access point containing capability information, supported data rates, etc., after receiving a probe request frame.
Reassociation request frame: A WNIC sends a reassociation request when it drops from the currently associated access point range and finds another access point with a stronger signal. The new access point coordinates the forwarding of any information that may still be contained in the buffer of the previous access point.
Reassociation response frame: Sent from an access point containing the acceptance or rejection to a WNIC reassociation request frame. The frame includes information required for association such as the association ID and supported data rates.
Action frame: extends the management frame format to control a certain action. Some of the action categories are Block Ack, Radio Measurement, Fast BSS Transition, etc. These frames are sent by a station when it needs to tell its peer to take a certain action. For example, a station can tell another station to set up a block acknowledgement by sending an ADDBA Request action frame. The other station would then respond with an ADDBA Response action frame.
The body of a management frame consists of frame-subtype-dependent fixed fields followed by a sequence of information elements (IEs).
The common structure of an IE consists of a one-byte Element ID identifying the element type, a one-byte Length giving the size of the information that follows, and a variable-length Information field carrying the element's payload.
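A minimal parsing sketch under that layout (it assumes the body passed in contains only information elements; real management frames begin with subtype-specific fixed fields, as noted above):

    def parse_information_elements(body: bytes) -> list[tuple[int, bytes]]:
        """Walk a sequence of information elements laid out as
        1-byte Element ID, 1-byte Length, then Length bytes of data."""
        elements = []
        offset = 0
        while offset + 2 <= len(body):
            element_id = body[offset]
            length = body[offset + 1]
            info = body[offset + 2 : offset + 2 + length]
            elements.append((element_id, info))
            offset += 2 + length
        return elements

    # Example: SSID element (Element ID 0) carrying the network name "lab-net"
    beacon_ies = bytes([0x00, 0x07]) + b"lab-net"
    print(parse_information_elements(beacon_ies))   # [(0, b'lab-net')]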
Control frames
Control frames facilitate the exchange of data frames between stations. Some common 802.11 control frames include:
Acknowledgement (ACK) frame: After receiving a data frame, the receiving station will send an ACK frame to the sending station if no errors are found. If the sending station doesn't receive an ACK frame within a predetermined period of time, the sending station will resend the frame.
Request to Send (RTS) frame: The RTS and CTS frames provide an optional collision reduction scheme for access points with hidden stations. A station sends an RTS frame as the first step in a two-way handshake required before sending data frames.
Clear to Send (CTS) frame: A station responds to an RTS frame with a CTS frame. It provides clearance for the requesting station to send a data frame. The CTS provides collision control management by including a time value for which all other stations are to hold off transmission while the requesting station transmits.
Data frames
Data frames carry packets from web pages, files, etc. within the body. The body begins with an IEEE 802.2 header, with the Destination Service Access Point (DSAP) specifying the protocol, followed by a Subnetwork Access Protocol (SNAP) header if the DSAP is hex AA, with the organizationally unique identifier (OUI) and protocol ID (PID) fields specifying the protocol. If the OUI is all zeroes, the protocol ID field is an EtherType value. Almost all 802.11 data frames use 802.2 and SNAP headers, and most use an OUI of 00:00:00 and an EtherType value.
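A sketch of reading that encapsulation, assuming the common case of an LLC header with DSAP and SSAP of 0xAA followed by a SNAP header with a zero OUI, so that the protocol ID is an EtherType:

    import struct

    def parse_llc_snap(body: bytes):
        """Parse the 802.2 LLC header and, when DSAP is 0xAA, the SNAP
        header that follows. Returns (ethertype, payload) for the common
        zero-OUI case, or None for other encapsulations."""
        dsap, ssap, control = body[0], body[1], body[2]   # control 0x03 = unnumbered info
        if dsap != 0xAA or ssap != 0xAA:
            return None                                   # not SNAP-encapsulated
        oui = body[3:6]
        (protocol_id,) = struct.unpack("!H", body[6:8])
        if oui == b"\x00\x00\x00":
            return protocol_id, body[8:]                  # protocol ID is an EtherType
        return None

    # 0x0800 is the EtherType for IPv4
    frame_body = bytes([0xAA, 0xAA, 0x03, 0x00, 0x00, 0x00, 0x08, 0x00]) + b"...IP packet..."
    print(parse_llc_snap(frame_body))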
Similar to TCP congestion control on the internet, frame loss is built into the operation of 802.11. To select the correct transmission speed or Modulation and Coding Scheme, a rate control algorithm may test different speeds. The actual packet loss rate of access points varies widely for different link conditions; on production access points, loss rates between 10% and 80% have been observed, with 30% being a common average. The link layer is expected to recover these lost frames: if the sender does not receive an Acknowledgement (ACK) frame, the frame is resent.
Standards and amendments
Within the IEEE 802.11 Working Group, the following IEEE Standards Association Standard and Amendments exist:
IEEE 802.11-1997: The WLAN standard was originally 1 Mbit/s and 2 Mbit/s, 2.4 GHz RF and infrared (IR) standard (1997), all the others listed below are Amendments to this standard, except for Recommended Practices 802.11F and 802.11T.
IEEE 802.11a: 54 Mbit/s, 5 GHz standard (1999, shipping products in 2001)
IEEE 802.11b: 5.5 Mbit/s and 11 Mbit/s, 2.4 GHz standard (1999)
IEEE 802.11c: Bridge operation procedures; included in the IEEE 802.1D standard (2001)
IEEE 802.11d: International (country-to-country) roaming extensions (2001)
IEEE 802.11e: Enhancements: QoS, including packet bursting (2005)
IEEE 802.11F: Inter-Access Point Protocol (2003) Withdrawn February 2006
IEEE 802.11g: 54 Mbit/s, 2.4 GHz standard (backwards compatible with b) (2003)
IEEE 802.11h: Spectrum Managed 802.11a (5 GHz) for European compatibility (2004)
IEEE 802.11i: Enhanced security (2004)
IEEE 802.11j: Extensions for Japan (4.9-5.0 GHz) (2004)
IEEE 802.11-2007: A new release of the standard that includes amendments a, b, d, e, g, h, i, and j. (July 2007)
IEEE 802.11k: Radio resource measurement enhancements (2008)
IEEE 802.11n: Higher Throughput WLAN at 2.4 and 5 GHz; 20 and 40 MHz channels; introduces MIMO to the standard (September 2009)
IEEE 802.11p: WAVE—Wireless Access for the Vehicular Environment (such as ambulances and passenger cars) (July 2010)
IEEE 802.11r: Fast BSS transition (FT) (2008)
IEEE 802.11s: Mesh Networking, Extended Service Set (ESS) (July 2011)
IEEE 802.11T: Wireless Performance Prediction (WPP)—test methods and metrics; Recommendation cancelled
IEEE 802.11u: Improvements related to HotSpots and 3rd-party authorization of clients, e.g., cellular network offload (February 2011)
IEEE 802.11v: Wireless network management (February 2011)
IEEE 802.11w: Protected Management Frames (September 2009)
IEEE 802.11y: 3650–3700 MHz Operation in the U.S. (2008)
IEEE 802.11z: Extensions to Direct Link Setup (DLS) (September 2010)
IEEE 802.11-2012: A new release of the standard that includes amendments k, n, p, r, s, u, v, w, y, and z (March 2012)
IEEE 802.11aa: Robust streaming of Audio Video Transport Streams (June 2012) - see Stream Reservation Protocol
IEEE 802.11ac: Very High Throughput WLAN at 5 GHz; wider channels (80 and 160 MHz); Multi-user MIMO (down-link only) (December 2013)
IEEE 802.11ad: Very High Throughput 60 GHz (December 2012) — see WiGig
IEEE 802.11ae: Prioritization of Management Frames (March 2012)
IEEE 802.11af: TV Whitespace (February 2014)
IEEE 802.11-2016: A new release of the standard that includes amendments aa, ac, ad, ae, and af (December 2016)
IEEE 802.11ah: Sub-1 GHz license exempt operation (e.g., sensor network, smart metering) (December 2016)
IEEE 802.11ai: Fast Initial Link Setup (December 2016)
IEEE 802.11aj: China Millimeter Wave (February 2018)
IEEE 802.11ak: Transit Links within Bridged Networks (June 2018)
IEEE 802.11aq: Pre-association Discovery (July 2018)
IEEE 802.11-2020: A new release of the standard that includes amendments ah, ai, aj, ak, and aq (December 2020)
IEEE 802.11ax: High Efficiency WLAN at 2.4, 5 and 6 GHz; introduces OFDMA to the standard (February 2021)
IEEE 802.11ay: Enhancements for Ultra High Throughput in and around the 60 GHz Band (March 2021)
IEEE 802.11ba: Wake Up Radio (March 2021)
In process
IEEE 802.11az: Next Generation Positioning (~ March 2021 for .11az final)
IEEE 802.11bb: Light Communications
IEEE 802.11bc: Enhanced Broadcast Service
IEEE 802.11bd: Enhancements for Next Generation V2X
IEEE 802.11be: Extremely High Throughput
IEEE 802.11bf: WLAN Sensing
IEEE 802.11bh: Randomized and Changing MAC Addresses
IEEE 802.11me: 802.11 Accumulated Maintenance Changes
IEEE 802.11bi: Enhanced Data Privacy
802.11F and 802.11T are recommended practices rather than standards and are capitalized as such.
802.11m is used for standard maintenance. 802.11ma was completed for 802.11-2007, 802.11mb for 802.11-2012, 802.11mc for 802.11-2016, and 802.11md for 802.11-2020.
Standard vs. amendment
Both the terms "standard" and "amendment" are used when referring to the different variants of IEEE standards.
As far as the IEEE Standards Association is concerned, there is only one current standard; it is denoted by IEEE 802.11 followed by the date published. IEEE 802.11-2020 is the only version currently in publication, superseding previous releases. The standard is updated by means of amendments. Amendments are created by task groups (TG). Both the task group and their finished document are denoted by 802.11 followed by a non-capitalized letter, for example, IEEE 802.11a and IEEE 802.11b. Updating 802.11 is the responsibility of task group m. In order to create a new version, TGm combines the previous version of the standard and all published amendments. TGm also provides clarification and interpretation to industry on published documents. New versions of the IEEE 802.11 were published in 1999, 2007, 2012, 2016, and 2020.
Nomenclature
Various terms in 802.11 are used to specify aspects of wireless local-area networking operation and may be unfamiliar to some readers.
For example, Time Unit (usually abbreviated TU) is used to indicate a unit of time equal to 1024 microseconds. Numerous time constants are defined in terms of TU (rather than the nearly equal millisecond).
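For example, a beacon interval of 100 TU corresponds to 102.4 ms; a trivial helper makes the conversion explicit (the beacon-interval value is just an illustrative figure):

    MICROSECONDS_PER_TU = 1024

    def tu_to_milliseconds(tu: float) -> float:
        """Convert 802.11 Time Units (1 TU = 1024 microseconds) to milliseconds."""
        return tu * MICROSECONDS_PER_TU / 1000.0

    print(tu_to_milliseconds(100))   # 102.4 ms, a typical beacon interval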
Also, the term "Portal" is used to describe an entity that is similar to an 802.1H bridge. A Portal provides access to the WLAN by non-802.11 LAN STAs.
Security
In 2001, a group from the University of California, Berkeley presented a paper describing weaknesses in the 802.11 Wired Equivalent Privacy (WEP) security mechanism defined in the original standard; they were followed by Fluhrer, Mantin, and Shamir's paper titled "Weaknesses in the Key Scheduling Algorithm of RC4". Not long after, Adam Stubblefield and AT&T publicly announced the first verification of the attack. In the attack, they were able to intercept transmissions and gain unauthorized access to wireless networks.
The IEEE set up a dedicated task group to create a replacement security solution, 802.11i (previously, this work was handled as part of a broader 802.11e effort to enhance the MAC layer). The Wi-Fi Alliance announced an interim specification called Wi-Fi Protected Access (WPA) based on a subset of the then-current IEEE 802.11i draft. These started to appear in products in mid-2003. IEEE 802.11i (also known as WPA2) itself was ratified in June 2004, and uses the Advanced Encryption Standard (AES), instead of RC4, which was used in WEP. The modern recommended encryption for the home/consumer space is WPA2 (AES Pre-Shared Key), and for the enterprise space is WPA2 along with a RADIUS authentication server (or another type of authentication server) and a strong authentication method such as EAP-TLS.
In January 2005, the IEEE set up yet another task group "w" to protect management and broadcast frames, which previously were sent unsecured. Its standard was published in 2009.
In December 2011, a security flaw was revealed that affects some wireless routers with a specific implementation of the optional Wi-Fi Protected Setup (WPS) feature. While WPS is not a part of 802.11, the flaw allows an attacker within the range of the wireless router to recover the WPS PIN and, with it, the router's 802.11i password in a few hours.
In late 2014, Apple announced that its iOS 8 mobile operating system would scramble MAC addresses during the pre-association stage to thwart retail footfall tracking made possible by the regular transmission of uniquely identifiable probe requests.
Wi-Fi users may be subjected to a Wi-Fi deauthentication attack to eavesdrop, attack passwords, or force the use of another, usually more expensive access point.
See also
802.11 Frame Types
Comparison of wireless data standards
Fujitsu Ltd. v. Netgear Inc.
Gi-Fi, a term used by some trade press to refer to faster versions of the IEEE 802.11 standards
LTE-WLAN Aggregation
OFDM system comparison table
TU (time unit)
TV White Space Database
Ultra-wideband
White spaces (radio)
Wi-Fi operating system support
Wibree or Bluetooth low energy
WiGig
Wireless USB – another wireless protocol primarily designed for shorter-range applications
Notes
Footnotes
References
External links
IEEE 802.11 working group
Official timelines of 802.11 standards from IEEE
List of all Wi-Fi Chipset Vendors – Including historical timeline of mergers and acquisitions
Computer-related introductions in 1997
Wireless networking standards
Local area networks
Internet Standard

An Internet Standard in computer network engineering refers to the normative specification of a technology that is appropriate for the Internet. Internet Standards allow interoperation of hardware and software from different sources, which allows the Internet to function. They are the lingua franca of worldwide communications.
In computer network engineering, an Internet Standard is a normative specification of a technology or methodology applicable to the Internet. Internet Standards are created and published by the Internet Engineering Task Force (IETF).
Engineering contributions to the IETF start as an Internet Draft, may be promoted to a Request for Comments, and may eventually become an Internet Standard.
An Internet Standard is characterized by technical maturity and usefulness. The IETF also defines a Proposed Standard as a less mature but stable and well-reviewed specification. A Draft Standard is a third classification that was discontinued in 2011. A Draft Standard was an intermediary step that occurred after a Proposed Standard but prior to an Internet Standard.
As put in RFC 2026: In general, an Internet Standard is a specification that is stable and well-understood, is technically competent, has multiple, independent, and interoperable implementations with substantial operational experience, enjoys significant public support, and is recognizably useful in some or all parts of the Internet.
Overview
An Internet Standard is documented by a Request for Comments (RFC) or a set of RFCs. A specification that is to become a Standard or part of a Standard begins as an Internet Draft, and is later, usually after several revisions, accepted and published by the RFC Editor as an RFC and labeled a Proposed Standard. Later, an RFC is elevated as Internet Standard, with an additional sequence number, when maturity has reached an acceptable level. Collectively, these stages are known as the Standards Track, and are defined in RFC 2026 and RFC 6410. The label Historic is applied to deprecated Standards Track documents or obsolete RFCs that were published before the Standards Track was established.
Only the IETF, represented by the Internet Engineering Steering Group (IESG), can approve Standards Track RFCs. The definitive list of Internet Standards is maintained in the Official Internet Protocol Standards. Previously, STD 1 used to maintain a snapshot of the list.
History and purpose of Internet Standards
An Internet standard is a set of rules that devices must follow when they connect to a network. As technology has evolved, the rules of engagement between computers have had to evolve with it; these rules are the protocols in use today. Most of them were developed long before the Internet age, going as far back as the 1970s, not long after the creation of personal computers.
TCP/IP
The Internet is officially considered to have gone live on January 1, 1983, when the Transmission Control Protocol/Internet Protocol (TCP/IP) went into effect. ARPANET (Advanced Research Projects Agency Network) and the Defense Data Network were the first networks to implement the protocols. TCP/IP is considered an essential part of how the Internet works because it defines the rules by which connections between hosts operate, and it is still used today to carry data across global networks.
IPsec
Internet Protocol Security (IPsec) is a collection of protocols that authenticate and encrypt the connections between devices, protecting traffic that crosses public networks. According to the IETF Datatracker, the working group dedicated to its creation was proposed on 25 November 1992; the group was chartered about half a year later, and the first draft was published in mid-1993.
HTTP
HyperText Transfer Protocol (HTTP) is one of the most commonly used protocols today in the context of the World Wide Web. HTTP is a simple protocol governing how documents written in HyperText Markup Language (HTML) are exchanged over networks. This protocol is the backbone of the Web, making the whole hypertext system practical. It was created by a team of developers led by Tim Berners-Lee, who proposed its creation in 1989. On August 6, 1991, he published the first complete version of HTTP on a public forum, a date some consider the official birth of the World Wide Web. HTTP has continued to evolve since its creation, becoming more sophisticated as networking technology has progressed. By default HTTP is not encrypted, so in practice HTTPS (HTTP Secure) is used.
TLS/SSL
Transport Layer Security (TLS) is a standard that enables two endpoints to communicate securely and privately. TLS replaced Secure Sockets Layer (SSL), which was created by Netscape and introduced before HTTPS; in fact, HTTPS was originally based on SSL. Because a single common way of encrypting data was needed, the IETF specified TLS 1.0 in RFC 2246 in January 1999. It has been upgraded since; the latest version, TLS 1.3, is defined in RFC 8446 (August 2018).
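As a brief illustration of TLS in practice (a sketch using Python's standard ssl module; example.com is just a placeholder host), a client wraps an ordinary TCP socket so that application data is encrypted in transit:

    import socket
    import ssl

    context = ssl.create_default_context()   # modern defaults, including TLS 1.3 where available

    with socket.create_connection(("example.com", 443)) as raw_sock:
        with context.wrap_socket(raw_sock, server_hostname="example.com") as tls_sock:
            print("negotiated:", tls_sock.version())   # e.g. 'TLSv1.3'
            tls_sock.sendall(b"GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")
            print(tls_sock.recv(200))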
OSI Model
Development of the Open Systems Interconnection (OSI) model began in 1977 under the International Organization for Standardization (ISO). An initial version was adopted in 1979, and the model was then updated several times; it took a few years for it to reach its final form, published as ISO 7498 in 1984. In 1995 the OSI model was revised again to satisfy the needs of continuing development in the field of computer networking.
UDP
The goal of the User Datagram Protocol (UDP) was to provide a way for two computers to communicate as quickly and with as little overhead as possible. UDP was conceived and realized by David P. Reed in 1980. It works by encapsulating application data into self-contained datagrams that are sent point to point, without connection setup, acknowledgements, or retransmission. Despite the drawback that datagrams may be lost or arrive out of order, this simplicity keeps UDP fast, and it is still widely used.
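A minimal sketch of this datagram model using Python's socket module (the loopback address and port number are arbitrary examples):

    import socket

    # Receiver: bind to a local port and wait for one datagram.
    receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    receiver.bind(("127.0.0.1", 9999))

    # Sender: no connection setup, just address the datagram and send it.
    sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sender.sendto(b"hello over UDP", ("127.0.0.1", 9999))

    data, addr = receiver.recvfrom(2048)   # delivery is not guaranteed in general
    print(data, "from", addr)

    sender.close()
    receiver.close()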
Standardization process
Becoming a standard is a two-step process within the Internet Standards Process: Proposed Standard and Internet Standard. These are called maturity levels and the process is called the Standards Track.
If an RFC is part of a proposal that is on the Standards Track, then at the first stage, the standard is proposed and subsequently organizations decide whether to implement this Proposed Standard. After the criteria in RFC 6410 is met (two separate implementations, widespread use, no errata etc.), the RFC can advance to Internet Standard.
The Internet Standards Process is defined in several "Best Current Practice" documents, notably BCP 9 ( RFC 2026 and RFC 6410). There were previously three standard maturity levels: Proposed Standard, Draft Standard and Internet Standard. RFC 6410 reduced this to two maturity levels.
Proposed Standard
RFC 2026 originally characterized Proposed Standards as immature specifications, but this stance was annulled by RFC 7127.
A Proposed Standard specification is stable, has resolved known design choices, has received significant community review, and appears to enjoy enough community interest to be considered valuable. Usually, neither implementation nor operational experience is required for the designation of a specification as a Proposed Standard.
Proposed Standards are of such quality that implementations can be deployed in the Internet. However, as with all technical specifications, Proposed Standards may be revised if problems are found or better solutions are identified as experience with deploying implementations of such technologies at scale is gathered.
Many Proposed Standards are actually deployed on the Internet and used extensively, as stable protocols. Actual practice has been that full progression through the sequence of standards levels is typically quite rare, and most popular IETF protocols remain at Proposed Standard.
Draft Standard
In October 2011, RFC 6410 merged the second and third maturity levels into one Draft Standard. Existing older Draft Standards retain that classification. The IESG can reclassify an old Draft Standard as Proposed Standard after two years (October 2013).
Internet Standard
An Internet Standard is characterized by a high degree of technical maturity and by a generally held belief that the specified protocol or service provides significant benefit to the Internet community. Generally Internet Standards cover interoperability of systems on the Internet through defining protocols, message formats, schemas, and languages. The most fundamental of the Internet Standards are the ones defining the Internet Protocol.
An Internet Standard ensures that hardware and software produced by different vendors can work together. Having a standard makes it much easier to develop software and hardware that link different networks because software and hardware can be developed one layer at a time. Normally, the standards used in data communication are called protocols.
All Internet Standards are given a number in the STD series. The series was summarized in its first document, STD 1 (RFC 5000), until 2013, but this practice was retired in RFC 7100. The definitive list of Internet Standards is now maintained by the RFC Editor.
Documents submitted to the IETF editor and accepted as an RFC are not revised; if the document has to be changed, it is submitted again and assigned a new RFC number. When an RFC becomes an Internet Standard (STD), it is assigned an STD number but retains its RFC number. When an Internet Standard is updated, its number is unchanged but refers to a different RFC or set of RFCs. For example, in 2007 RFC 3700 was an Internet Standard (STD 1) and in May 2008 it was replaced with RFC 5000. RFC 3700 received Historic status, and RFC 5000 became STD 1.
The list of Internet standards was originally published as STD 1 but this practice has been abandoned in favor of an online list maintained by the RFC Editor.
Organizations of Internet Standards
The standardization process is divided into three steps:
Proposed standards are standards to be implemented and can be changed at any time
The Draft Standard was carefully tested and revised in preparation for becoming a future Internet Standard
Internet standards are mature standards.
There are five Internet standards organizations: the Internet Engineering Task Force (IETF), the Internet Society (ISOC), the Internet Architecture Board (IAB), the Internet Research Task Force (IRTF), and the World Wide Web Consortium (W3C). All of these organizations are required to use and express the common language of the Internet in order to remain relevant in the current Internet phase. Some basic aims of the Internet Standards Process are technical excellence; prior implementation and testing; and clear, concise, and easily understood documentation.
Creating and improving Internet Standards is an ongoing effort, and the Internet Engineering Task Force (IETF) plays a significant role in this regard. These standards are shaped and published by the IETF, the leading Internet standards body, which uses well-documented procedures for creating them. Once published, the standards are made easily accessible at no cost.
Until 1993, the IETF was supported by the United States federal government; it is now supervised by the Internet Society's Internet Architecture Board (IAB). The IETF is a bottom-up organization with no formal requirements for affiliation and no official membership procedure. It works closely with the World Wide Web Consortium (W3C) and other standards development organizations. It relies heavily on working groups, which are constituted and proposed to an Area Director, to develop IETF specifications and policies with the goal of making the Internet work better. A working group operates under the direction of its Area Director and works toward consensus. The proposed charter is circulated to the IESG and IAB mailing lists and, after approval, is forwarded to the wider IETF. Complete agreement across a working group is not essential to adopt a proposal; working groups are only required to confirm that the consensus is strong.
Working groups produce documents in the form of RFCs, which are memoranda containing methods, behaviors, research, and innovations applicable to the working of the Internet and Internet-connected systems. In other words, Requests for Comments (RFCs) are primarily used to develop standard network protocols and related network functions. Some RFCs are intended to be informational, while others are published as Internet standards. The final form of an RFC becomes the standard and is issued with a number; after that, no further comments or changes are accepted for that final form. This process is followed in every area to build consensus on an Internet-related problem and to develop Internet standards as a solution. There are eight common areas on which the IETF focuses, each using various working groups along with an area director. In the General area it develops the Internet standards process itself; in the Applications area it concentrates on Internet applications such as Web-related protocols. It also works on the development of Internet infrastructure, for example in the form of PPP extensions, and establishes specifications for network operations such as remote network monitoring. The IETF emphasizes the development of technical standards that make up the Internet protocol suite (TCP/IP). The Internet Architecture Board (IAB) and the Internet Research Task Force (IRTF) complement the work of the IETF by focusing on novel technologies.
The IETF is the standards-making organization focused on producing "standard" specifications of technologies and their intended usage. It concentrates on matters associated with the evolution of the current Internet and TCP/IP technology. It is divided into numerous working groups (WGs), each of which is responsible for developing standards and technologies in a particular area, for example routing or security. Participants in working groups are volunteers drawn from fields such as equipment vendors, network operators, and research institutions. Work first focuses on reaching a common understanding of the requirements the effort should address; an IETF working group is then formed, and requirements are aired in Birds of a Feather (BoF) sessions at IETF meetings.
Internet Engineering Task Force
The Internet Engineering Task Force (IETF) is the premier internet standards organization. It follows an open and well-documented processes for setting internet standards. The resources that the IETF offers include RFCs, internet-drafts, IANA functions, intellectual property rights, standards process, and publishing and accessing RFCs.
RFCs
Documents that contain technical specifications and notes for the Internet.
The acronym RFC originally stood for "Request For Comments"; the expanded phrase is no longer used, and the documents are now simply referred to as RFCs.
The website RFC Editor is an official archive of internet standards, draft standards, and proposed standards.
Internet Drafts
Working documents of the IETF and its working groups.
Other groups may distribute working documents as Internet-Drafts
Intellectual property rights
All IETF standards are freely available to view and read, and generally free to implement by anyone without permission or payment.
Standards Process
The process of creating a standard is straightforward: a specification goes through an extensive review by the Internet community and is revised based on experience.
Publishing and accessing RFCs
Internet-Drafts that successfully completed the review process.
Submitted to RFC editor for publication.
Types of Internet Standards
There are two ways in which an Internet Standard is formed and can be categorized as one of the following: "de jure" standards and "de facto" standards. A de facto standard becomes a standard through widespread use within the tech community. A de jure standard is formally created by official standards-developing organizations. These standards undergo the Internet Standards Process. Common de jure standards include ASCII, SCSI, and the Internet protocol suite.
Internet Standard Specifications
Specifications subject to the Internet Standards Process can be categorized into one of the following: Technical Specification (TS) and Applicability Statement (AS). A Technical Specification is a statement describing all relevant aspects of a protocol, service, procedure, convention, or format. This includes its scope and its intent for use, or "domain of applicability". However, a TS's use within the Internet is defined by an Applicability Statement. An AS specifies how, and under what circumstances, TSs may be applied to support a particular Internet capability. An AS identifies the ways in which relevant TSs are combined and specifies the parameters or sub-functions of TS protocols. An AS also describes the domains of applicability of TSs, such as Internet routers, terminal servers, or datagram-based database servers. An AS also applies one of the following "requirement levels" to each of the TSs to which it refers:
Required: Implementation of the referenced TS is required to achieve interoperability. For example, Internet systems using the Internet Protocol Suite are required to implement IP and ICMP.
Recommended: Implementation of the referenced TS is not required, but is desirable in the domain of applicability of the AS. Inclusion of the functions, features, and protocols of Recommended TSs in the developments of systems is encouraged. For example, the TELNET protocol should be implemented by all systems that intend to use remote access.
Elective: Implementation of the referenced TS is optional. The TS is only necessary in a specific environment. For example, the DECNET MIB could be seen as valuable in an environment where the DECNET protocol is used.
Common Standards
Web Standards
(Figure: TCP/IP model and associated Internet Standards)
Web standards are a type of internet standard which define aspects of the World Wide Web. They allow for the building and rendering of websites. The three key standards used by the World Wide Web are the Hypertext Transfer Protocol (HTTP), HTML, and the URL. Respectively, they specify the transfer of data between a browser and a web server, the content and layout of a web page, and what web page identifiers mean.
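For instance, the URL standard defines how an identifier is broken into a scheme, host, and path, which is what a browser does before issuing an HTTP request for an HTML page. A small sketch using Python's standard library (the URL is a placeholder example):

    from urllib.parse import urlparse

    url = "https://www.example.org/standards/index.html?lang=en"
    parts = urlparse(url)

    print(parts.scheme)   # 'https'  -> which protocol to use (HTTP over TLS)
    print(parts.netloc)   # 'www.example.org' -> which web server to contact
    print(parts.path)     # '/standards/index.html' -> which resource (an HTML page)
    print(parts.query)    # 'lang=en'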
Network Standards
Network standards are a type of internet standard which defines rules for data communication in networking technologies and processes. Internet standards allow for the communication procedure of a device to or from other devices.
In reference to the TCP/IP Model, common standards and protocols in each layer are as follows:
The Transport layer: TCP and SPX
Network layer: IP and IPX
Data Link layer: IEEE 802.3 for LAN and Frame Relay for WAN
Physical layer: 8P8C and V.92
Official Internet Protocol Standards
The most recent document that has been published by the IETF is titled Registration Data Access Protocol (RDAP) Query Format or RFC 9082 and is archived on the site RFC-Editor. The abstract of the document explains that it "describes uniform patterns to construct HTTP URLs that may be used to retrieve registration information from registries using "RESTful" web access patterns". RDAP allows users to access current registration data and was created to replace the WHOIS protocol.
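RFC 9082 defines path segments such as "domain", "ip", and "autnum" that are appended to an RDAP service's base URL. The sketch below builds such lookup URLs; the rdap.org base URL is only an illustrative choice of public redirector, not something mandated by the RFC:

    BASE = "https://rdap.org"   # example redirector; registries publish their own base URLs

    def rdap_query_url(object_class: str, handle: str) -> str:
        """Build an RDAP lookup URL of the form <base>/<object-class>/<handle>,
        e.g. /domain/example.com or /ip/192.0.2.1."""
        return f"{BASE}/{object_class}/{handle}"

    print(rdap_query_url("domain", "example.com"))   # https://rdap.org/domain/example.com
    print(rdap_query_url("ip", "192.0.2.1"))         # https://rdap.org/ip/192.0.2.1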
Another internet standard protocol was published by the IETF in June 2021 containing information about JSON data structures representing registration information maintained by Regional Internet Registries (RIRs) and Domain Name Registries (DNRs). The abstract goes on to say that those data structures are used to form Registration Data Access Protocol (RDAP) query responses. This document makes RFC 7483 obsolete.
Current Internet Standard Issues
Even now, the internet is rife with Internet Standard issues. In October 2021, Facebook users, as well as users of its other related apps such as WhatsApp, Messenger, Oculus, and Instagram, found themselves without service for six hours. The outage extended to internal communications at the companies themselves, as they relied on their internal communications platform, Workplace. Outside of the company, many businesses and websites were severely affected. Many websites that embed scripts for Like buttons or comment sections saw increased loading times because they were trying to reach services that were unreachable. Others rely on Facebook and WhatsApp in order to fulfill orders, communicate with customers, and generally conduct business.
The loss of service started with routine maintenance. Facebook has multiple data centers, and a command was issued to assess the availability of the backbone connections between them; instead, it accidentally took those connections down. Normally, such a flawed command would not have run, but a bug in the tool that checks commands let it through. In a domino effect, Facebook's DNS servers could no longer reach the data centers, and from there the BGP routing information stopped being advertised to the rest of the internet. It was as if Facebook and its other services had been wiped from existence.
The Future of Internet Standards
The Internet has been viewed as an open playground, free for people to use and communities to monitor. However, large companies have shaped and molded it to best fit their needs. The future of internet standards will be no different. Currently, there are widely used but insecure protocols such as the Border Gateway Protocol (BGP) and the Domain Name System (DNS). This reflects common practices that focus more on innovation than security. Companies have the power to improve this situation. With the Internet in the hands of industry, users must depend on businesses to address vulnerabilities present in these standards.
Ways to make BGP and DNS safer already exist, but they are not widespread. For example, there is an existing BGP safeguard called Resource Public Key Infrastructure (RPKI). It is a database of routes that are known to be safe and have been cryptographically signed. Users and companies submit routes and check other users' routes for safety. If it were more widely adopted, more routes could be added and confirmed. RPKI is picking up momentum: as of December 2020, tech giant Google had registered 99% of its routes with RPKI, and it is making it easier for businesses to adopt BGP safeguards. DNS also has a security protocol with a low adoption rate: DNS Security Extensions (DNSSEC). Essentially, at every stage of the DNS lookup process, DNSSEC adds a signature to data to show it has not been tampered with.
Some companies have taken the initiative to secure internet protocols. It is up to the rest to make it more widespread.
See also
Standardization
Web standards
References
External links
RFC Editor
Information security

Information security, sometimes shortened to InfoSec, is the practice of protecting information by mitigating information risks. It is part of information risk management. It typically involves preventing or reducing the probability of unauthorized/inappropriate access to data, or the unlawful use, disclosure, disruption, deletion, corruption, modification, inspection, recording, or devaluation of information. It also involves actions intended to reduce the adverse impacts of such incidents. Protected information may take any form, e.g. electronic or physical, tangible (e.g. paperwork) or intangible (e.g. knowledge). Information security's primary focus is the balanced protection of the confidentiality, integrity, and availability of data (also known as the CIA triad) while maintaining a focus on efficient policy implementation, all without hampering organization productivity. This is largely achieved through a structured risk management process that involves:
identifying information and related assets, plus potential threats, vulnerabilities, and impacts;
evaluating the risks;
deciding how to address or treat the risks i.e. to avoid, mitigate, share or accept them;
where risk mitigation is required, selecting or designing appropriate security controls and implementing them;
monitoring the activities, making adjustments as necessary to address any issues, changes and improvement opportunities.
To standardize this discipline, academics and professionals collaborate to offer guidance, policies, and industry standards on password, antivirus software, firewall, encryption software, legal liability, security awareness and training, and so forth. This standardization may be further driven by a wide variety of laws and regulations that affect how data is accessed, processed, stored, transferred and destroyed. However, the implementation of any standards and guidance within an entity may have limited effect if a culture of continual improvement isn't adopted.
Definition
Various definitions of information security are suggested below, summarized from different sources:
"Preservation of confidentiality, integrity and availability of information. Note: In addition, other properties, such as authenticity, accountability, non-repudiation and reliability can also be involved." (ISO/IEC 27000:2009)
"The protection of information and information systems from unauthorized access, use, disclosure, disruption, modification, or destruction in order to provide confidentiality, integrity, and availability." (CNSS, 2010)
"Ensures that only authorized users (confidentiality) have access to accurate and complete information (integrity) when required (availability)." (ISACA, 2008)
"Information Security is the process of protecting the intellectual property of an organisation." (Pipkin, 2000)
"...information security is a risk management discipline, whose job is to manage the cost of information risk to the business." (McDermott and Geer, 2001)
"A well-informed sense of assurance that information risks and controls are in balance." (Anderson, J., 2003)
"Information security is the protection of information and minimizes the risk of exposing information to unauthorized parties." (Venter and Eloff, 2003)
"Information Security is a multidisciplinary area of study and professional activity which is concerned with the development and implementation of security mechanisms of all available types (technical, organizational, human-oriented and legal) in order to keep information in all its locations (within and outside the organization's perimeter) and, consequently, information systems, where information is created, processed, stored, transmitted and destroyed, free from threats. Threats to information and information systems may be categorized and a corresponding security goal may be defined for each category of threats. A set of security goals, identified as a result of a threat analysis, should be revised periodically to ensure its adequacy and conformance with the evolving environment. The currently relevant set of security goals may include: confidentiality, integrity, availability, privacy, authenticity & trustworthiness, non-repudiation, accountability and auditability." (Cherdantseva and Hilton, 2013)
Information and information resource security using telecommunication system or devices means protecting information, information systems or books from unauthorized access, damage, theft, or destruction (Kurose and Ross, 2010).
Overview
At the core of information security is information assurance, the act of maintaining the confidentiality, integrity, and availability (CIA) of information, ensuring that information is not compromised in any way when critical issues arise. These issues include but are not limited to natural disasters, computer/server malfunction, and physical theft. While paper-based business operations are still prevalent, requiring their own set of information security practices, enterprise digital initiatives are increasingly being emphasized, with information assurance now typically being dealt with by information technology (IT) security specialists. These specialists apply information security to technology (most often some form of computer system). It is worthwhile to note that a computer does not necessarily mean a home desktop. A computer is any device with a processor and some memory. Such devices can range from non-networked standalone devices as simple as calculators, to networked mobile computing devices such as smartphones and tablet computers. IT security specialists are almost always found in any major enterprise/establishment due to the nature and value of the data within larger businesses. They are responsible for keeping all of the technology within the company secure from malicious cyber attacks that often attempt to acquire critical private information or gain control of the internal systems.
The field of information security has grown and evolved significantly in recent years. It offers many areas for specialization, including securing networks and allied infrastructure, securing applications and databases, security testing, information systems auditing, business continuity planning, electronic record discovery, and digital forensics. Information security professionals are very stable in their employment: more than 80 percent of professionals had no change in employer or employment over a period of a year, and the number of professionals was projected to grow more than 11 percent annually from 2014 to 2019.
Threats
Information security threats come in many different forms. Some of the most common threats today are software attacks, theft of intellectual property, theft of identity, theft of equipment or information, sabotage, and information extortion. Most people have experienced software attacks of some sort. Viruses, worms, phishing attacks, and Trojan horses are a few common examples of software attacks. The theft of intellectual property has also been an extensive issue for many businesses in the information technology (IT) field. Identity theft is the attempt to act as someone else, usually to obtain that person's personal information or to take advantage of their access to vital information through social engineering. Theft of equipment or information is becoming more prevalent today because most devices are mobile, are prone to theft, and have become far more desirable as the amount of data they can hold increases. Sabotage usually consists of the destruction of an organization's website in an attempt to cause loss of confidence on the part of its customers. Information extortion consists of theft of a company's property or information as an attempt to receive a payment in exchange for returning the information or property back to its owner, as with ransomware. There are many ways to help protect against some of these attacks, but one of the most effective precautions is to conduct periodic user awareness training. The number one threat to any organisation is its own users or internal employees, also called insider threats.
Governments, military, corporations, financial institutions, hospitals, non-profit organisations, and private businesses amass a great deal of confidential information about their employees, customers, products, research, and financial status. Should confidential information about a business' customers or finances or new product line fall into the hands of a competitor or a black hat hacker, a business and its customers could suffer widespread, irreparable financial loss, as well as damage to the company's reputation. From a business perspective, information security must be balanced against cost; the Gordon-Loeb Model provides a mathematical economic approach for addressing this concern.
For the individual, information security has a significant effect on privacy, which is viewed very differently in various cultures.
Responses to threats
Possible responses to a security threat or risk are:
reduce/mitigate – implement safeguards and countermeasures to eliminate vulnerabilities or block threats
assign/transfer – place the cost of the threat onto another entity or organization such as purchasing insurance or outsourcing
accept – evaluate if the cost of the countermeasure outweighs the possible cost of loss due to the threat
History
Since the early days of communication, diplomats and military commanders understood that it was necessary to provide some mechanism to protect the confidentiality of correspondence and to have some means of detecting tampering. Julius Caesar is credited with the invention of the Caesar cipher c. 50 B.C., which was created in order to prevent his secret messages from being read should a message fall into the wrong hands. However, for the most part protection was achieved through the application of procedural handling controls. Sensitive information was marked up to indicate that it should be protected and transported by trusted persons, guarded and stored in a secure environment or strong box. As postal services expanded, governments created official organizations to intercept, decipher, read, and reseal letters (e.g., the U.K.'s Secret Office, founded in 1653).
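The Caesar cipher mentioned above is simple enough to express in a few lines; the sketch below shifts each letter by a fixed amount (three, the shift traditionally attributed to Caesar):

    def caesar_encrypt(plaintext: str, shift: int = 3) -> str:
        """Shift each letter forward by `shift` places, wrapping around
        the alphabet; non-letters are left unchanged."""
        result = []
        for ch in plaintext:
            if ch.isalpha():
                base = ord("A") if ch.isupper() else ord("a")
                result.append(chr((ord(ch) - base + shift) % 26 + base))
            else:
                result.append(ch)
        return "".join(result)

    def caesar_decrypt(ciphertext: str, shift: int = 3) -> str:
        return caesar_encrypt(ciphertext, -shift)

    print(caesar_encrypt("ATTACK AT DAWN"))   # 'DWWDFN DW GDZQ'
    print(caesar_decrypt("DWWDFN DW GDZQ"))   # 'ATTACK AT DAWN'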
In the mid-nineteenth century more complex classification systems were developed to allow governments to manage their information according to the degree of sensitivity. For example, the British Government codified this, to some extent, with the publication of the Official Secrets Act in 1889. Section 1 of the law concerned espionage and unlawful disclosures of information, while Section 2 dealt with breaches of official trust. A public interest defense was soon added to defend disclosures in the interest of the state. A similar law was passed in India in 1889, The Indian Official Secrets Act, which was associated with the British colonial era and used to crack down on newspapers that opposed the Raj's policies. A newer version was passed in 1923 that extended to all matters of confidential or secret information for governance. By the time of the First World War, multi-tier classification systems were used to communicate information to and from various fronts, which encouraged greater use of code making and breaking sections in diplomatic and military headquarters. Encoding became more sophisticated between the wars as machines were employed to scramble and unscramble information.
The establishment of computer security inaugurated the history of information security. The need for such appeared during World War II. The volume of information shared by the Allied countries during the Second World War necessitated formal alignment of classification systems and procedural controls. An arcane range of markings evolved to indicate who could handle documents (usually officers rather than enlisted troops) and where they should be stored as increasingly complex safes and storage facilities were developed. The Enigma Machine, which was employed by the Germans to encrypt the data of warfare and was successfully decrypted by Alan Turing, can be regarded as a striking example of creating and using secured information. Procedures evolved to ensure documents were destroyed properly, and it was the failure to follow these procedures which led to some of the greatest intelligence coups of the war (e.g., the capture of U-570).
Various Mainframe computers were connected online during the Cold War to complete more sophisticated tasks, in a communication process easier than mailing magnetic tapes back and forth by computer centers. As such, the Advanced Research Projects Agency (ARPA), of the United States Department of Defense, started researching the feasibility of a networked system of communication to trade information within the United States Armed Forces. In 1968, the ARPANET project was formulated by Dr. Larry Roberts, which would later evolve into what is known as the internet.
In 1973, important elements of ARPANET security were found by internet pioneer Robert Metcalfe to have many flaws such as the: "vulnerability of password structure and formats; lack of safety procedures for dial-up connections; and nonexistent user identification and authorizations", aside from the lack of controls and safeguards to keep data safe from unauthorized access. Hackers had effortless access to ARPANET, as phone numbers were known by the public. Due to these problems, coupled with the constant violation of computer security, as well as the exponential increase in the number of hosts and users of the system, "network security" was often alluded to as "network insecurity".
The end of the twentieth century and the early years of the twenty-first century saw rapid advancements in telecommunications, computing hardware and software, and data encryption. The availability of smaller, more powerful, and less expensive computing equipment made electronic data processing within the reach of small business and home users. The establishment of the Transmission Control Protocol/Internet Protocol (TCP/IP) in the early 1980s enabled different types of computers to communicate. These computers quickly became interconnected through the internet.
The rapid growth and widespread use of electronic data processing and electronic business conducted through the internet, along with numerous occurrences of international terrorism, fueled the need for better methods of protecting the computers and the information they store, process, and transmit. The academic disciplines of computer security and information assurance emerged along with numerous professional organizations, all sharing the common goals of ensuring the security and reliability of information systems.
Basic principles
Key concepts
The CIA triad of confidentiality, integrity, and availability is at the heart of information security. (The members of the classic InfoSec triad—confidentiality, integrity, and availability—are interchangeably referred to in the literature as security attributes, properties, security goals, fundamental aspects, information criteria, critical information characteristics and basic building blocks.) However, debate continues about whether or not this CIA triad is sufficient to address rapidly changing technology and business requirements, with recommendations to consider expanding on the intersections between availability and confidentiality, as well as the relationship between security and privacy. Other principles such as "accountability" have sometimes been proposed; it has been pointed out that issues such as non-repudiation do not fit well within the three core concepts.
The triad seems to have first been mentioned in a NIST publication in 1977.
In 1992 and revised in 2002, the OECD's Guidelines for the Security of Information Systems and Networks proposed the nine generally accepted principles: awareness, responsibility, response, ethics, democracy, risk assessment, security design and implementation, security management, and reassessment. Building upon those, in 2004 the NIST's Engineering Principles for Information Technology Security proposed 33 principles. From each of these derived guidelines and practices.
In 1998, Donn Parker proposed an alternative model for the classic CIA triad that he called the six atomic elements of information. The elements are confidentiality, possession, integrity, authenticity, availability, and utility. The merits of the Parkerian Hexad are a subject of debate amongst security professionals.
In 2011, The Open Group published the information security management standard O-ISM3. This standard proposed an operational definition of the key concepts of security, with elements called "security objectives", related to access control (9), availability (3), data quality (1), compliance, and technical (4). In 2009, the DoD Software Protection Initiative released the Three Tenets of Cybersecurity, which are System Susceptibility, Access to the Flaw, and Capability to Exploit the Flaw. Neither of these models is widely adopted.
Confidentiality
In information security, confidentiality "is the property, that information is not made available or disclosed to unauthorized individuals, entities, or processes." While similar to "privacy," the two words are not interchangeable. Rather, confidentiality is a component of privacy that is implemented to protect our data from unauthorized viewers. Examples of confidentiality of electronic data being compromised include laptop theft, password theft, or sensitive emails being sent to the incorrect individuals.
Integrity
In IT security, data integrity means maintaining and assuring the accuracy and completeness of data over its entire lifecycle. This means that data cannot be modified in an unauthorized or undetected manner. This is not the same thing as referential integrity in databases, although it can be viewed as a special case of consistency as understood in the classic ACID model of transaction processing. Information security systems typically incorporate controls to ensure their own integrity, in particular protecting the kernel or core functions against both deliberate and accidental threats. Multi-purpose and multi-user computer systems aim to compartmentalize the data and processing such that no user or process can adversely impact another: the controls may not succeed however, as we see in incidents such as malware infections, hacks, data theft, fraud, and privacy breaches.
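One common technical control for detecting unauthorized or undetected modification is a cryptographic hash recorded while the data is known to be good and re-checked later. A minimal sketch follows; note that the recorded digest itself must be protected (for example, signed or stored separately) for the check to resist deliberate tampering:

    import hashlib

    def digest(data: bytes) -> str:
        """SHA-256 fingerprint of the data, recorded while it is known good."""
        return hashlib.sha256(data).hexdigest()

    original = b"quarterly-report-v1"
    recorded = digest(original)

    # Later: recompute and compare; any change to the data changes the digest.
    tampered = b"quarterly-report-v2"
    print(digest(original) == recorded)   # True  - unchanged
    print(digest(tampered) == recorded)   # False - modification detected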
More broadly, integrity is an information security principle that involves human/social, process, and commercial integrity, as well as data integrity. As such it touches on aspects such as credibility, consistency, truthfulness, completeness, accuracy, timeliness, and assurance.
Availability
For any information system to serve its purpose, the information must be available when it is needed. This means the computing systems used to store and process the information, the security controls used to protect it, and the communication channels used to access it must be functioning correctly. High availability systems aim to remain available at all times, preventing service disruptions due to power outages, hardware failures, and system upgrades. Ensuring availability also involves preventing denial-of-service attacks, such as a flood of incoming messages to the target system, essentially forcing it to shut down.
In the realm of information security, availability can often be viewed as one of the most important parts of a successful information security program. Ultimately end-users need to be able to perform job functions; by ensuring availability an organization is able to perform to the standards that an organization's stakeholders expect. This can involve topics such as proxy configurations, outside web access, the ability to access shared drives and the ability to send emails. Executives oftentimes do not understand the technical side of information security and look at availability as an easy fix, but this often requires collaboration from many different organizational teams, such as network operations, development operations, incident response, and policy/change management. A successful information security team involves many different key roles to mesh and align for the CIA triad to be provided effectively.
Non-repudiation
In law, non-repudiation implies one's intention to fulfill their obligations under a contract. It also implies that one party of a transaction cannot deny having received a transaction, nor can the other party deny having sent a transaction.
It is important to note that while technology such as cryptographic systems can assist in non-repudiation efforts, the concept is at its core a legal concept transcending the realm of technology. It is not, for instance, sufficient to show that the message matches a digital signature signed with the sender's private key, and thus only the sender could have sent the message, and nobody else could have altered it in transit (data integrity). The alleged sender could in return demonstrate that the digital signature algorithm is vulnerable or flawed, or allege or prove that their signing key has been compromised. The fault for these violations may or may not lie with the sender, and such assertions may or may not relieve the sender of liability, but the assertion would invalidate the claim that the signature necessarily proves authenticity and integrity. As such, the sender may repudiate the message (because authenticity and integrity are prerequisites for non-repudiation).
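As an illustration of the technical side of this argument, the sketch below signs a message with a private key and verifies it with the corresponding public key, using the third-party Python cryptography package (an assumption made only for this example; the message text is invented). A valid verification shows only that the message is intact and was signed with that private key; whether that binds a person legally is a separate question, for the reasons given above.

```python
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

private_key = Ed25519PrivateKey.generate()   # held, and hopefully protected, by the sender
public_key = private_key.public_key()        # distributed to anyone who needs to verify

message = b"Please transfer 100 EUR to account 12345"   # hypothetical message
signature = private_key.sign(message)

try:
    public_key.verify(signature, message)
    print("Signature checks out: message is intact and was signed with this private key.")
except InvalidSignature:
    print("Signature does not match: the message or signature has been altered.")
```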
Risk management
Broadly speaking, risk is the likelihood that something bad will happen that causes harm to an informational asset (or the loss of the asset). A vulnerability is a weakness that could be used to endanger or cause harm to an informational asset. A threat is anything (man-made or act of nature) that has the potential to cause harm. The likelihood that a threat will use a vulnerability to cause harm creates a risk. When a threat does use a vulnerability to inflict harm, it has an impact. In the context of information security, the impact is a loss of availability, integrity, and confidentiality, and possibly other losses (lost income, loss of life, loss of real property).
The Certified Information Systems Auditor (CISA) Review Manual 2006 defines risk management as "the process of identifying vulnerabilities and threats to the information resources used by an organization in achieving business objectives, and deciding what countermeasures, if any, to take in reducing risk to an acceptable level, based on the value of the information resource to the organization."
There are two things in this definition that may need some clarification. First, the process of risk management is an ongoing, iterative process. It must be repeated indefinitely. The business environment is constantly changing and new threats and vulnerabilities emerge every day. Second, the choice of countermeasures (controls) used to manage risks must strike a balance between productivity, cost, effectiveness of the countermeasure, and the value of the informational asset being protected. Furthermore, these processes have limitations as security breaches are generally rare and emerge in a specific context which may not be easily duplicated. Thus, any process and countermeasure should itself be evaluated for vulnerabilities. It is not possible to identify all risks, nor is it possible to eliminate all risk. The remaining risk is called "residual risk."
A risk assessment is carried out by a team of people who have knowledge of specific areas of the business. Membership of the team may vary over time as different parts of the business are assessed. The assessment may use a subjective qualitative analysis based on informed opinion, or, where reliable dollar figures and historical information are available, the analysis may use quantitative analysis.
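One common quantitative technique, shown here only as an illustration with invented figures, is the annualized loss expectancy, which multiplies the expected loss from a single incident by the expected number of incidents per year. A minimal sketch:

```python
def annualized_loss_expectancy(asset_value: float,
                               exposure_factor: float,
                               annual_rate_of_occurrence: float) -> float:
    # SLE (single loss expectancy) = asset value * exposure factor
    # ALE = SLE * ARO (annual rate of occurrence)
    single_loss_expectancy = asset_value * exposure_factor
    return single_loss_expectancy * annual_rate_of_occurrence

# Hypothetical figures: an asset worth 200,000, 30% of which is lost per incident,
# with an incident expected once every two years.
print(annualized_loss_expectancy(200_000, 0.30, 0.5))   # 30000.0
```

The resulting figure can then be compared against the yearly cost of a candidate countermeasure when deciding which controls are proportionate.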
Research has shown that the most vulnerable point in most information systems is the human user, operator, designer, or other human. The ISO/IEC 27002:2005 Code of practice for information security management recommends the following be examined during a risk assessment:
security policy,
organization of information security,
asset management,
human resources security,
physical and environmental security,
communications and operations management,
access control,
information systems acquisition, development, and maintenance,
information security incident management,
business continuity management, and
regulatory compliance.
In broad terms, the risk management process consists of:
Identification of assets and estimating their value. Include: people, buildings, hardware, software, data (electronic, print, other), supplies.
Conduct a threat assessment. Include: Acts of nature, acts of war, accidents, malicious acts originating from inside or outside the organization.
Conduct a vulnerability assessment, and for each vulnerability, calculate the probability that it will be exploited. Evaluate policies, procedures, standards, training, physical security, quality control, technical security.
Calculate the impact that each threat would have on each asset. Use qualitative analysis or quantitative analysis.
Identify, select and implement appropriate controls. Provide a proportional response. Consider productivity, cost effectiveness, and value of the asset.
Evaluate the effectiveness of the control measures. Ensure the controls provide the required cost effective protection without discernible loss of productivity.
For any given risk, management can choose to accept the risk based upon the relative low value of the asset, the relative low frequency of occurrence, and the relative low impact on the business. Or, leadership may choose to mitigate the risk by selecting and implementing appropriate control measures to reduce the risk. In some cases, the risk can be transferred to another business by buying insurance or outsourcing to another business. The reality of some risks may be disputed. In such cases leadership may choose to deny the risk.
Security controls
Selecting and implementing proper security controls will initially help an organization bring down risk to acceptable levels. Control selection should follow, and should be based on, the risk assessment. Controls can vary in nature, but fundamentally they are ways of protecting the confidentiality, integrity or availability of information. ISO/IEC 27001 has defined controls in different areas. Organizations can implement additional controls according to the requirements of the organization. ISO/IEC 27002 offers a guideline for organizational information security standards.
Administrative
Administrative controls (also called procedural controls) consist of approved written policies, procedures, standards, and guidelines. Administrative controls form the framework for running the business and managing people. They inform people on how the business is to be run and how day-to-day operations are to be conducted. Laws and regulations created by government bodies are also a type of administrative control because they inform the business. Some industry sectors have policies, procedures, standards, and guidelines that must be followed – the Payment Card Industry Data Security Standard (PCI DSS) required by Visa and MasterCard is such an example. Other examples of administrative controls include the corporate security policy, password policy, hiring policies, and disciplinary policies.
Administrative controls form the basis for the selection and implementation of logical and physical controls. Logical and physical controls are manifestations of administrative controls, which are of paramount importance.
Logical
Logical controls (also called technical controls) use software and data to monitor and control access to information and computing systems. Passwords, network and host-based firewalls, network intrusion detection systems, access control lists, and data encryption are examples of logical controls.
An important logical control that is frequently overlooked is the principle of least privilege, which requires that an individual, program or system process not be granted any more access privileges than are necessary to perform the task. A blatant example of the failure to adhere to the principle of least privilege is logging into Windows as user Administrator to read email and surf the web. Violations of this principle can also occur when an individual collects additional access privileges over time. This happens when employees' job duties change, employees are promoted to a new position, or employees are transferred to another department. The access privileges required by their new duties are frequently added onto their already existing access privileges, which may no longer be necessary or appropriate.
Physical
Physical controls monitor and control the environment of the work place and computing facilities. They also monitor and control access to and from such facilities and include doors, locks, heating and air conditioning, smoke and fire alarms, fire suppression systems, cameras, barricades, fencing, security guards, cable locks, etc. Separating the network and workplace into functional areas are also physical controls.
An important physical control that is frequently overlooked is separation of duties, which ensures that an individual cannot complete a critical task alone. For example, an employee who submits a request for reimbursement should not also be able to authorize payment or print the check. An applications programmer should not also be the server administrator or the database administrator; these roles and responsibilities must be separated from one another.
Defense in depth
Information security must protect information throughout its lifespan, from the initial creation of the information on through to the final disposal of the information. The information must be protected while in motion and while at rest. During its lifetime, information may pass through many different information processing systems and through many different parts of information processing systems. There are many different ways the information and information systems can be threatened. To fully protect the information during its lifetime, each component of the information processing system must have its own protection mechanisms. The building up, layering on, and overlapping of security measures is called "defense in depth." In contrast to a metal chain, which is famously only as strong as its weakest link, the defense in depth strategy aims at a structure where, should one defensive measure fail, other measures will continue to provide protection.
Recall the earlier discussion about administrative controls, logical controls, and physical controls. The three types of controls can be used to form the basis upon which to build a defense in depth strategy. With this approach, defense in depth can be conceptualized as three distinct layers or planes laid one on top of the other. Additional insight into defense in depth can be gained by thinking of it as forming the layers of an onion, with data at the core of the onion, people the next outer layer of the onion, and network security, host-based security, and application security forming the outermost layers of the onion. Both perspectives are equally valid, and each provides valuable insight into the implementation of a good defense in depth strategy.
Classification
An important aspect of information security and risk management is recognizing the value of information and defining appropriate procedures and protection requirements for the information. Not all information is equal and so not all information requires the same degree of protection. This requires information to be assigned a security classification. The first step in information classification is to identify a member of senior management as the owner of the particular information to be classified. Next, develop a classification policy. The policy should describe the different classification labels, define the criteria for information to be assigned a particular label, and list the required security controls for each classification.
Some factors that influence which classification information should be assigned include how much value that information has to the organization, how old the information is and whether or not the information has become obsolete. Laws and other regulatory requirements are also important considerations when classifying information. The Information Systems Audit and Control Association (ISACA) and its Business Model for Information Security also serves as a tool for security professionals to examine security from a systems perspective, creating an environment where security can be managed holistically, allowing actual risks to be addressed.
The type of information security classification labels selected and used will depend on the nature of the organization, with examples being:
In the business sector, labels such as: Public, Sensitive, Private, Confidential.
In the government sector, labels such as: Unclassified, Unofficial, Protected, Confidential, Secret, Top Secret, and their non-English equivalents.
In cross-sectoral formations, the Traffic Light Protocol, which consists of: White, Green, Amber, and Red.
All employees in the organization, as well as business partners, must be trained on the classification schema and understand the required security controls and handling procedures for each classification. The classification assigned to a particular information asset should be reviewed periodically to ensure the classification is still appropriate for the information and to ensure the security controls required by the classification are in place and are being followed correctly.
Access control
Access to protected information must be restricted to people who are authorized to access the information. The computer programs, and in many cases the computers that process the information, must also be authorized. This requires that mechanisms be in place to control the access to protected information. The sophistication of the access control mechanisms should be in parity with the value of the information being protected; the more sensitive or valuable the information, the stronger the control mechanisms need to be. The foundation on which access control mechanisms are built starts with identification and authentication.
Access control is generally considered in three steps: identification, authentication, and authorization.
Identification
Identification is an assertion of who someone is or what something is. If a person makes the statement "Hello, my name is John Doe" they are making a claim of who they are. However, their claim may or may not be true. Before John Doe can be granted access to protected information it will be necessary to verify that the person claiming to be John Doe really is John Doe. Typically the claim is in the form of a username. By entering that username you are claiming "I am the person the username belongs to".
Authentication
Authentication is the act of verifying a claim of identity. When John Doe goes into a bank to make a withdrawal, he tells the bank teller he is John Doe, a claim of identity. The bank teller asks to see a photo ID, so he hands the teller his driver's license. The bank teller checks the license to make sure it has John Doe printed on it and compares the photograph on the license against the person claiming to be John Doe. If the photo and name match the person, then the teller has authenticated that John Doe is who he claimed to be. Similarly, by entering the correct password, the user is providing evidence that he/she is the person the username belongs to.
There are three different types of information that can be used for authentication:
Something you know: things such as a PIN, a password, or your mother's maiden name
Something you have: a driver's license or a magnetic swipe card
Something you are: biometrics, including palm prints, fingerprints, voice prints, and retina (eye) scans
Strong authentication requires providing more than one type of authentication information (two-factor authentication). The username is the most common form of identification on computer systems today and the password is the most common form of authentication. Usernames and passwords have served their purpose, but they are increasingly inadequate. Usernames and passwords are slowly being replaced or supplemented with more sophisticated authentication mechanisms such as Time-based One-time Password algorithms.
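As an illustration of the time-based one-time password mechanisms mentioned above, the sketch below generates a six-digit code in the spirit of RFC 6238 using only the Python standard library. It is a minimal sketch for explanation; production systems should use a vetted authentication library, and the Base32 shared secret shown is a made-up example.

```python
import base64, hashlib, hmac, struct, time

def totp(base32_secret: str, interval: int = 30, digits: int = 6) -> str:
    key = base64.b32decode(base32_secret, casefold=True)
    counter = int(time.time()) // interval                 # current 30-second time step
    msg = struct.pack(">Q", counter)                       # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()     # HMAC-SHA1 as in RFC 4226
    offset = digest[-1] & 0x0F                             # dynamic truncation offset
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))   # hypothetical shared secret
```

Both the server and the user's device compute the same code from the shared secret and the current time, so the password changes every interval without transmitting any secret at login time.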
Authorization
After a person, program or computer has successfully been identified and authenticated then it must be determined what informational resources they are permitted to access and what actions they will be allowed to perform (run, view, create, delete, or change). This is called authorization. Authorization to access information and other computing services begins with administrative policies and procedures. The policies prescribe what information and computing services can be accessed, by whom, and under what conditions. The access control mechanisms are then configured to enforce these policies. Different computing systems are equipped with different kinds of access control mechanisms. Some may even offer a choice of different access control mechanisms. The access control mechanism a system offers will be based upon one of three approaches to access control, or it may be derived from a combination of the three approaches.
The non-discretionary approach consolidates all access control under a centralized administration. Access to information and other resources is usually based on the individual's function (role) in the organization or the tasks the individual must perform. The discretionary approach gives the creator or owner of the information resource the ability to control access to those resources. In the mandatory access control approach, access is granted or denied based upon the security classification assigned to the information resource.
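A minimal sketch of the role-based (non-discretionary) idea is shown below; the roles, permissions, and users are hypothetical, and the structure is deliberately simplified compared with real access control systems.

```python
# Hypothetical roles, permissions, and users for illustration only.
ROLE_PERMISSIONS = {
    "accounts_clerk":   {"invoice:create", "invoice:view"},
    "accounts_manager": {"invoice:create", "invoice:view", "invoice:approve"},
    "auditor":          {"invoice:view"},
}
USER_ROLES = {"alice": "accounts_clerk", "bob": "auditor"}

def is_authorized(user: str, permission: str) -> bool:
    """Grant access only if the user's role includes the requested permission."""
    role = USER_ROLES.get(user)
    return role is not None and permission in ROLE_PERMISSIONS.get(role, set())

print(is_authorized("alice", "invoice:create"))   # True
print(is_authorized("bob", "invoice:approve"))    # False
```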
Examples of common access control mechanisms in use today include role-based access control, available in many advanced database management systems; simple file permissions provided in the UNIX and Windows operating systems; Group Policy Objects provided in Windows network systems; and Kerberos, RADIUS, TACACS, and the simple access lists used in many firewalls and routers.
To be effective, policies and other security controls must be enforceable and upheld. Effective policies ensure that people are held accountable for their actions. The U.S. Treasury's guidelines for systems processing sensitive or proprietary information, for example, state that all failed and successful authentication and access attempts must be logged, and all access to information must leave some type of audit trail.
Also, the need-to-know principle needs to be in effect when talking about access control. This principle gives access rights to a person to perform their job functions. This principle is used in the government when dealing with different clearances. Even though two employees in different departments have a top-secret clearance, they must have a need-to-know in order for information to be exchanged. Within the need-to-know principle, network administrators grant the employee the least amount of privilege to prevent employees from accessing more than what they are supposed to. Need-to-know helps to enforce the confidentiality-integrity-availability triad. Need-to-know directly impacts the confidentiality area of the triad.
Cryptography
Information security uses cryptography to transform usable information into a form that renders it unusable by anyone other than an authorized user; this process is called encryption. Information that has been encrypted (rendered unusable) can be transformed back into its original usable form by an authorized user who possesses the cryptographic key, through the process of decryption. Cryptography is used in information security to protect information from unauthorized or accidental disclosure while the information is in transit (either electronically or physically) and while information is in storage.
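The sketch below illustrates this encryption and decryption cycle using the third-party Python cryptography package's Fernet construction, chosen here only as a convenient example of an authenticated symmetric encryption API; the document does not prescribe any particular library, and generating the key inline is purely for illustration, since keys must themselves be protected as noted below.

```python
from cryptography.fernet import Fernet

key = Fernet.generate_key()          # in practice the key must be stored and protected
cipher = Fernet(key)

token = cipher.encrypt(b"Quarterly results: confidential")   # unusable without the key
plaintext = cipher.decrypt(token)                             # decryption by a key holder
assert plaintext == b"Quarterly results: confidential"
```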
Cryptography provides information security with other useful applications as well, including improved authentication methods, message digests, digital signatures, non-repudiation, and encrypted network communications. Older, less secure applications such as Telnet and File Transfer Protocol (FTP) are slowly being replaced with more secure applications such as Secure Shell (SSH) that use encrypted network communications. Wireless communications can be encrypted using protocols such as WPA/WPA2 or the older (and less secure) WEP. Wired communications (such as ITU‑T G.hn) are secured using AES for encryption and X.1035 for authentication and key exchange. Software applications such as GnuPG or PGP can be used to encrypt data files and email.
Cryptography can introduce security problems when it is not implemented correctly. Cryptographic solutions need to be implemented using industry-accepted solutions that have undergone rigorous peer review by independent experts in cryptography. The length and strength of the encryption key is also an important consideration. A key that is weak or too short will produce weak encryption. The keys used for encryption and decryption must be protected with the same degree of rigor as any other confidential information. They must be protected from unauthorized disclosure and destruction, and they must be available when needed. Public key infrastructure (PKI) solutions address many of the problems that surround key management.
Process
The terms "reasonable and prudent person", "due care", and "due diligence" have been used in the fields of finance, securities, and law for many years. In recent years these terms have found their way into the fields of computing and information security. U.S. Federal Sentencing Guidelines now make it possible to hold corporate officers liable for failing to exercise due care and due diligence in the management of their information systems.
In the business world, stockholders, customers, business partners, and governments have the expectation that corporate officers will run the business in accordance with accepted business practices and in compliance with laws and other regulatory requirements. This is often described as the "reasonable and prudent person" rule. A prudent person takes due care to ensure that everything necessary is done to operate the business by sound business principles and in a legal, ethical manner. A prudent person is also diligent (mindful, attentive, ongoing) in their due care of the business.
In the field of information security, Harris offers the following definitions of due care and due diligence:
"Due care are steps that are taken to show that a company has taken responsibility for the activities that take place within the corporation and has taken the necessary steps to help protect the company, its resources, and employees." And, [Due diligence are the] "continual activities that make sure the protection mechanisms are continually maintained and operational."
Attention should be paid to two important points in these definitions. First, in due care, steps are taken to show; this means that the steps can be verified, measured, or even produce tangible artifacts. Second, in due diligence, there are continual activities; this means that people are actually doing things to monitor and maintain the protection mechanisms, and these activities are ongoing.
Organizations have a responsibility to practice duty of care when applying information security. The Duty of Care Risk Analysis Standard (DoCRA) provides principles and practices for evaluating risk. It considers all parties that could be affected by those risks. DoCRA helps evaluate whether safeguards are appropriate in protecting others from harm while presenting a reasonable burden. With increased data breach litigation, companies must balance security controls, compliance, and their mission.
Security governance
The Software Engineering Institute at Carnegie Mellon University, in a publication titled Governing for Enterprise Security (GES) Implementation Guide, defines characteristics of effective security governance. These include:
An enterprise-wide issue
Leaders are accountable
Viewed as a business requirement
Risk-based
Roles, responsibilities, and segregation of duties defined
Addressed and enforced in policy
Adequate resources committed
Staff aware and trained
A development life cycle requirement
Planned, managed, measurable, and measured
Reviewed and audited
Incident response plans
An incident response plan (IRP) is a group of policies that dictate an organization's reaction to a cyber attack. Once a security breach has been identified, the plan is initiated. It is important to note that there can be legal implications to a data breach. Knowing local and federal laws is critical. Every plan is unique to the needs of the organization, and it can involve skill sets that are not part of an IT team. For example, a lawyer may be included in the response plan to help navigate the legal implications of a data breach.
As mentioned above every plan is unique but most plans will include the following:
Preparation
Good preparation includes the development of an Incident Response Team (IRT). Skills needed by this team include penetration testing, computer forensics, and network security. This team should also keep track of trends in cybersecurity and modern attack strategies. A training program for end users is important as well, since most modern attack strategies target users on the network.
Identification
This part of the incident response plan identifies if there was a security event. When an end user reports information or an admin notices irregularities, an investigation is launched. An incident log is a crucial part of this step. All of the members of the team should be updating this log to ensure that information flows as fast as possible. If it has been identified that a security breach has occurred the next step should be activated.
Containment
In this phase, the IRT works to isolate the areas in which the breach took place in order to limit the scope of the security event. During this phase it is important to preserve information forensically so it can be analyzed later in the process. Containment could be as simple as physically containing a server room or as complex as segmenting a network to not allow the spread of a virus.
Eradication
This is where the threat that was identified is removed from the affected systems. This could include deleting malicious files, terminating compromised accounts, or deleting other components. Some events do not require this step; however, it is important to fully understand the event before moving to this step. This will help to ensure that the threat is completely removed.
Recovery
This stage is where the systems are restored back to original operation. This stage could include the recovery of data, changing user access information, or updating firewall rules or policies to prevent a breach in the future. Without executing this step, the system could still be vulnerable to future security threats.
Lessons Learned
In this step, information that has been gathered during this process is used to make future decisions on security. This step is crucial to ensuring that future events are prevented. Using this information to further train admins is critical to the process. This step can also be used to process information that is distributed from other entities who have experienced a security event.
Change management
Change management is a formal process for directing and controlling alterations to the information processing environment. This includes alterations to desktop computers, the network, servers, and software. The objectives of change management are to reduce the risks posed by changes to the information processing environment and improve the stability and reliability of the processing environment as changes are made. It is not the objective of change management to prevent or hinder necessary changes from being implemented.
Any change to the information processing environment introduces an element of risk. Even apparently simple changes can have unexpected effects. One of management's many responsibilities is the management of risk. Change management is a tool for managing the risks introduced by changes to the information processing environment. Part of the change management process ensures that changes are not implemented at inopportune times when they may disrupt critical business processes or interfere with other changes being implemented.
Not every change needs to be managed. Some kinds of changes are a part of the everyday routine of information processing and adhere to a predefined procedure, which reduces the overall level of risk to the processing environment. Creating a new user account or deploying a new desktop computer are examples of changes that do not generally require change management. However, relocating user file shares, or upgrading the Email server pose a much higher level of risk to the processing environment and are not a normal everyday activity. The critical first steps in change management are (a) defining change (and communicating that definition) and (b) defining the scope of the change system.
Change management is usually overseen by a change review board composed of representatives from key business areas, security, networking, systems administrators, database administration, application developers, desktop support, and the help desk. The tasks of the change review board can be facilitated with the use of an automated workflow application. The responsibility of the change review board is to ensure the organization's documented change management procedures are followed. The change management process is as follows:
Request: Anyone can request a change. The person making the change request may or may not be the same person that performs the analysis or implements the change. When a request for change is received, it may undergo a preliminary review to determine if the requested change is compatible with the organization's business model and practices, and to determine the amount of resources needed to implement the change.
Approve: Management runs the business and controls the allocation of resources; therefore, management must approve requests for changes and assign a priority for every change. Management might choose to reject a change request if the change is not compatible with the business model, industry standards or best practices. Management might also choose to reject a change request if the change requires more resources than can be allocated for the change.
Plan: Planning a change involves discovering the scope and impact of the proposed change; analyzing the complexity of the change; allocating resources; and developing, testing, and documenting both implementation and back-out plans. The criteria on which a decision to back out will be made need to be defined.
Test: Every change must be tested in a safe test environment, which closely reflects the actual production environment, before the change is applied to the production environment. The backout plan must also be tested.
Schedule: Part of the change review board's responsibility is to assist in the scheduling of changes by reviewing the proposed implementation date for potential conflicts with other scheduled changes or critical business activities.
Communicate: Once a change has been scheduled it must be communicated. The communication is to give others the opportunity to remind the change review board about other changes or critical business activities that might have been overlooked when scheduling the change. The communication also serves to make the help desk and users aware that a change is about to occur. Another responsibility of the change review board is to ensure that scheduled changes have been properly communicated to those who will be affected by the change or otherwise have an interest in the change.
Implement: At the appointed date and time, the changes must be implemented. Part of the planning process was to develop an implementation plan, a testing plan and a back-out plan. If the implementation of the change fails, the post-implementation testing fails, or other "drop dead" criteria have been met, the back-out plan should be implemented.
Document: All changes must be documented. The documentation includes the initial request for change, its approval, the priority assigned to it, the implementation, testing and back out plans, the results of the change review board critique, the date/time the change was implemented, who implemented it, and whether the change was implemented successfully, failed or postponed.
Post-change review: The change review board should hold a post-implementation review of changes. It is particularly important to review failed and backed out changes. The review board should try to understand the problems that were encountered, and look for areas for improvement.
Change management procedures that are simple to follow and easy to use can greatly reduce the overall risks created when changes are made to the information processing environment. Good change management procedures improve the overall quality and success of changes as they are implemented. This is accomplished through planning, peer review, documentation, and communication.
ISO/IEC 20000, The Visible OPS Handbook: Implementing ITIL in 4 Practical and Auditable Steps (Full book summary), and ITIL all provide valuable guidance on implementing an efficient and effective change management program for information security.
Business continuity
Business continuity management (BCM) concerns arrangements aiming to protect an organization's critical business functions from interruption due to incidents, or at least minimize the effects. BCM is essential to any organization to keep technology and business in line with current threats to the continuation of business as usual. The BCM should be included in an organization's risk analysis plan to ensure that all of the necessary business functions have what they need to keep going in the event of any type of threat to any business function.
It encompasses:
Analysis of requirements, e.g., identifying critical business functions, dependencies and potential failure points, potential threats and hence incidents or risks of concern to the organization;
Specification, e.g., maximum tolerable outage periods; recovery point objectives (maximum acceptable periods of data loss);
Architecture and design, e.g., an appropriate combination of approaches including resilience (e.g. engineering IT systems and processes for high availability, avoiding or preventing situations that might interrupt the business), incident and emergency management (e.g., evacuating premises, calling the emergency services, triage/situation assessment and invoking recovery plans), recovery (e.g., rebuilding) and contingency management (generic capabilities to deal positively with whatever occurs using whatever resources are available);
Implementation, e.g., configuring and scheduling backups, data transfers, etc., duplicating and strengthening critical elements; contracting with service and equipment suppliers;
Testing, e.g., business continuity exercises of various types, costs and assurance levels;
Management, e.g., defining strategies, setting objectives and goals; planning and directing the work; allocating funds, people and other resources; prioritization relative to other activities; team building, leadership, control, motivation and coordination with other business functions and activities (e.g., IT, facilities, human resources, risk management, information risk and security, operations); monitoring the situation, checking and updating the arrangements when things change; maturing the approach through continuous improvement, learning and appropriate investment;
Assurance, e.g., testing against specified requirements; measuring, analyzing, and reporting key parameters; conducting additional tests, reviews and audits for greater confidence that the arrangements will go to plan if invoked.
Whereas BCM takes a broad approach to minimizing disaster-related risks by reducing both the probability and the severity of incidents, a disaster recovery plan (DRP) focuses specifically on resuming business operations as quickly as possible after a disaster. A disaster recovery plan, invoked soon after a disaster occurs, lays out the steps necessary to recover critical information and communications technology (ICT) infrastructure. Disaster recovery planning includes establishing a planning group, performing risk assessment, establishing priorities, developing recovery strategies, preparing inventories and documentation of the plan, developing verification criteria and procedure, and lastly implementing the plan.
Laws and regulations
Below is a partial listing of governmental laws and regulations in various parts of the world that have, had, or will have, a significant effect on data processing and information security. Important industry sector regulations have also been included when they have a significant impact on information security.
The UK Data Protection Act 1998 makes new provisions for the regulation of the processing of information relating to individuals, including the obtaining, holding, use or disclosure of such information. The European Union Data Protection Directive (EUDPD) requires that all E.U. members adopt national regulations to standardize the protection of data privacy for citizens throughout the E.U.
The Computer Misuse Act 1990 is an Act of the U.K. Parliament making computer crime (e.g., hacking) a criminal offense. The act has become a model from which several other countries, including Canada and the Republic of Ireland, have drawn inspiration when subsequently drafting their own information security laws.
The E.U.'s Data Retention Directive (annulled) required internet service providers and phone companies to keep data on every electronic message sent and phone call made for between six months and two years.
The Family Educational Rights and Privacy Act (FERPA) (34 CFR Part 99) is a U.S. Federal law that protects the privacy of student education records. The law applies to all schools that receive funds under an applicable program of the U.S. Department of Education. Generally, schools must have written permission from the parent or eligible student in order to release any information from a student's education record.
The Federal Financial Institutions Examination Council's (FFIEC) security guidelines for auditors specifies requirements for online banking security.
The Health Insurance Portability and Accountability Act (HIPAA) of 1996 requires the adoption of national standards for electronic health care transactions and national identifiers for providers, health insurance plans, and employers. Additionally, it requires health care providers, insurance providers and employers to safeguard the security and privacy of health data.
The Gramm–Leach–Bliley Act of 1999 (GLBA), also known as the Financial Services Modernization Act of 1999, protects the privacy and security of private financial information that financial institutions collect, hold, and process.
Section 404 of the Sarbanes–Oxley Act of 2002 (SOX) requires publicly traded companies to assess the effectiveness of their internal controls for financial reporting in annual reports they submit at the end of each fiscal year. Chief information officers are responsible for the security, accuracy, and the reliability of the systems that manage and report the financial data. The act also requires publicly traded companies to engage with independent auditors who must attest to, and report on, the validity of their assessments.
The Payment Card Industry Data Security Standard (PCI DSS) establishes comprehensive requirements for enhancing payment account data security. It was developed by the founding payment brands of the PCI Security Standards Council — including American Express, Discover Financial Services, JCB, MasterCard Worldwide, and Visa International — to help facilitate the broad adoption of consistent data security measures on a global basis. The PCI DSS is a multifaceted security standard that includes requirements for security management, policies, procedures, network architecture, software design, and other critical protective measures.
State security breach notification laws (California and many others) require businesses, nonprofits, and state institutions to notify consumers when unencrypted "personal information" may have been compromised, lost, or stolen.
The Personal Information Protection and Electronics Document Act (PIPEDA) of Canada supports and promotes electronic commerce by protecting personal information that is collected, used or disclosed in certain circumstances, by providing for the use of electronic means to communicate or record information or transactions and by amending the Canada Evidence Act, the Statutory Instruments Act and the Statute Revision Act.
Greece's Hellenic Authority for Communication Security and Privacy (ADAE) (Law 165/2011) establishes and describes the minimum information security controls that should be deployed by every company which provides electronic communication networks and/or services in Greece in order to protect customers' confidentiality. These include both managerial and technical controls (e.g., log records should be stored for two years).
Greece's Hellenic Authority for Communication Security and Privacy (ADAE) (Law 205/2013) concentrates around the protection of the integrity and availability of the services and data offered by Greek telecommunication companies. The law forces these and other related companies to build, deploy, and test appropriate business continuity plans and redundant infrastructures.
Culture
Describing more than simply how security-aware employees are, information security culture is the ideas, customs, and social behaviors of an organization that impact information security in both positive and negative ways. Cultural concepts can help different segments of the organization work effectively toward information security, or can work against it. The way employees think and feel about security and the actions they take can have a big impact on information security in organizations. Roer & Petric (2017) identify seven core dimensions of information security culture in organizations:
Attitudes: Employees’ feelings and emotions about the various activities that pertain to the organizational security of information.
Behaviors: Actual or intended activities and risk-taking actions of employees that have direct or indirect impact on information security.
Cognition: Employees' awareness, verifiable knowledge, and beliefs regarding practices, activities, and self-efficacy that are related to information security.
Communication: Ways employees communicate with each other, sense of belonging, support for security issues, and incident reporting.
Compliance: Adherence to organizational security policies, awareness of the existence of such policies and the ability to recall the substance of such policies.
Norms: Perceptions of security-related organizational conduct and practices that are informally deemed either normal or deviant by employees and their peers, e.g. hidden expectations regarding security behaviors and unwritten rules regarding uses of information-communication technologies.
Responsibilities: Employees' understanding of the roles and responsibilities they have as a critical factor in sustaining or endangering the security of information, and thereby the organization.
Andersson and Reimers (2014) found that employees often do not see themselves as part of the organization's information security "effort" and often take actions that ignore organizational information security best interests. Research shows information security culture needs to be improved continuously. In Information Security Culture from Analysis to Change, the authors commented, "It's a never ending process, a cycle of evaluation and change or maintenance." To manage the information security culture, five steps should be taken: pre-evaluation, strategic planning, operative planning, implementation, and post-evaluation.
Pre-Evaluation: to identify the awareness of information security within employees and to analyze current security policy
Strategic Planning: to come up with a better awareness program, clear targets need to be set. Clustering people into groups is helpful to achieve it
Operative Planning: create a good security culture based on internal communication, management buy-in, security awareness, and training programs
Implementation: should feature commitment of management, communication with organizational members, courses for all organizational members, and commitment of the employees
Post-evaluation: to better gauge the effectiveness of the prior steps and build on continuous improvement
Sources of standards
The International Organization for Standardization (ISO) is a consortium of national standards institutes from 157 countries, coordinated through a secretariat in Geneva, Switzerland. ISO is the world's largest developer of standards. ISO 15443: "Information technology – Security techniques – A framework for IT security assurance", ISO/IEC 27002: "Information technology – Security techniques – Code of practice for information security management", ISO-20000: "Information technology – Service management", and ISO/IEC 27001: "Information technology – Security techniques – Information security management systems – Requirements" are of particular interest to information security professionals.
The US National Institute of Standards and Technology (NIST) is a non-regulatory federal agency within the U.S. Department of Commerce. The NIST Computer Security Division develops standards, metrics, tests, and validation programs as well as publishes standards and guidelines to increase secure IT planning, implementation, management, and operation. NIST is also the custodian of the U.S. Federal Information Processing Standard publications (FIPS).
The Internet Society is a professional membership society with more than 100 organizations and over 20,000 individual members in over 180 countries. It provides leadership in addressing issues that confront the future of the internet, and it is the organizational home for the groups responsible for internet infrastructure standards, including the Internet Engineering Task Force (IETF) and the Internet Architecture Board (IAB). The ISOC hosts the Requests for Comments (RFCs) which includes the Official Internet Protocol Standards and the RFC-2196 Site Security Handbook.
The Information Security Forum (ISF) is a global nonprofit organization of several hundred leading organizations in financial services, manufacturing, telecommunications, consumer goods, government, and other areas. It undertakes research into information security practices and offers advice in its biannual Standard of Good Practice and more detailed advisories for members.
The Institute of Information Security Professionals (IISP) is an independent, non-profit body governed by its members, with the principal objective of advancing the professionalism of information security practitioners and thereby the professionalism of the industry as a whole. The institute developed the IISP Skills Framework. This framework describes the range of competencies expected of information security and information assurance professionals in the effective performance of their roles. It was developed through collaboration between both private and public sector organizations, world-renowned academics, and security leaders.
The German Federal Office for Information Security (in German, Bundesamt für Sicherheit in der Informationstechnik (BSI)) publishes the BSI-Standards 100-1 to 100-4, a set of recommendations including "methods, processes, procedures, approaches and measures relating to information security". The BSI-Standard 100-2 IT-Grundschutz Methodology describes how information security management can be implemented and operated. The standard includes a very specific guide, the IT Baseline Protection Catalogs (also known as IT-Grundschutz Catalogs). Before 2005, the catalogs were known as the "IT Baseline Protection Manual". The Catalogs are a collection of documents useful for detecting and combating security-relevant weak points in the IT environment (IT cluster). As of September 2013, the collection encompasses over 4,400 pages, including the introduction and catalogs. The IT-Grundschutz approach is aligned with the ISO/IEC 2700x family.
The European Telecommunications Standards Institute standardized a catalog of information security indicators, headed by the Industrial Specification Group (ISG) ISI.
See also
Backup
Capability-based security
Data breach
Data-centric security
Enterprise information security architecture
Identity-based security
Information infrastructure
Information security audit
Information security indicators
Information security management
Information security standards
Information technology
Information technology security audit
IT risk
ITIL security management
Kill chain
List of computer security certifications
Mobile security
Network Security Services
Privacy engineering
Privacy software
Privacy-enhancing technologies
Security bug
Security convergence
Security information management
Security level management
Security of Information Act
Security service (telecommunication)
Single sign-on
Verification and validation
References
Further reading
Anderson, K., "IT Security Professionals Must Evolve for Changing Market", SC Magazine, October 12, 2006.
Aceituno, V., "On Information Security Paradigms", ISSA Journal, September 2005.
Dhillon, G., Principles of Information Systems Security: text and cases, John Wiley & Sons, 2007.
Easttom, C., Computer Security Fundamentals (2nd Edition) Pearson Education, 2011.
Lambo, T., "ISO/IEC 27001: The future of infosec certification", ISSA Journal, November 2006.
Dustin, D., " Awareness of How Your Data is Being Used and What to Do About It", "CDR Blog", May 2017.
Bibliography
External links
DoD IA Policy Chart on the DoD Information Assurance Technology Analysis Center web site.
patterns & practices Security Engineering Explained
Open Security Architecture- Controls and patterns to secure IT systems
IWS – Information Security Chapter
Ross Anderson's book "Security Engineering"
Data security
Security
Crime prevention
National security
Cryptography
Information governance |
15076 | https://en.wikipedia.org/wiki/International%20Data%20Encryption%20Algorithm | International Data Encryption Algorithm | In cryptography, the International Data Encryption Algorithm (IDEA), originally called Improved Proposed Encryption Standard (IPES), is a symmetric-key block cipher designed by James Massey of ETH Zurich and Xuejia Lai and was first described in 1991. The algorithm was intended as a replacement for the Data Encryption Standard (DES). IDEA is a minor revision of an earlier cipher Proposed Encryption Standard (PES).
The cipher was designed under a research contract with the Hasler Foundation, which became part of Ascom-Tech AG. The cipher was patented in a number of countries but was freely available for non-commercial use. The name "IDEA" is also a trademark. The last patents expired in 2012, and IDEA is now patent-free and thus completely free for all uses.
IDEA was used in Pretty Good Privacy (PGP) v2.0 and was incorporated after the original cipher used in v1.0, BassOmatic, was found to be insecure. IDEA is an optional algorithm in the OpenPGP standard.
Operation
IDEA operates on 64-bit blocks using a 128-bit key and consists of a series of 8 identical transformations (a round, see the illustration) and an output transformation (the half-round). The processes for encryption and decryption are similar. IDEA derives much of its security by interleaving operations from different groups — modular addition and multiplication, and bitwise eXclusive OR (XOR) — which are algebraically "incompatible" in some sense. In more detail, these operators, which all deal with 16-bit quantities, are listed below (a short code sketch of the three operations follows the list):
Bitwise XOR (exclusive OR) (denoted with a blue circled plus).
Addition modulo 2^16 (denoted with a green boxed plus).
Multiplication modulo 2^16 + 1, where the all-zero word (0x0000) in inputs is interpreted as 2^16, and 2^16 in output is interpreted as the all-zero word (0x0000) (denoted by a red circled dot).
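A minimal Python sketch of these three operations, written here for illustration and following the zero-word convention described above for the multiplication:

```python
MOD = 1 << 16   # 65536

def xor16(a: int, b: int) -> int:
    return a ^ b                      # bitwise XOR of two 16-bit words

def add16(a: int, b: int) -> int:
    return (a + b) % MOD              # addition modulo 2^16

def mul16(a: int, b: int) -> int:
    # Multiplication modulo 2^16 + 1; the all-zero word stands for 2^16.
    a = MOD if a == 0 else a
    b = MOD if b == 0 else b
    product = (a * b) % (MOD + 1)
    return 0 if product == MOD else product
```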
After the 8 rounds comes a final “half-round”, the output transformation illustrated below (the swap of the middle two values cancels out the swap at the end of the last round, so that there is no net swap):
Structure
The overall structure of IDEA follows the Lai–Massey scheme. XOR is used for both subtraction and addition. IDEA uses a key-dependent half-round function. To work with 16-bit words (meaning 4 inputs instead of 2 for the 64-bit block size), IDEA uses the Lai–Massey scheme twice in parallel, with the two parallel round functions being interwoven with each other. To ensure sufficient diffusion, two of the sub-blocks are swapped after each round.
Key schedule
Each round uses 6 16-bit sub-keys, while the half-round uses 4, a total of 52 for 8.5 rounds. The first 8 sub-keys are extracted directly from the key, with K1 from the first round being the lower 16 bits; further groups of 8 keys are created by rotating the main key left 25 bits between each group of 8. This means that it is rotated less than once per round, on average, for a total of 6 rotations.
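A sketch of this expansion in Python is shown below. It is illustrative only: the 25-bit rotation and the total of 52 subkeys follow the description above, while the choice to take words from the most significant end of the key first is a convention assumed here and should be checked against the formal specification.

```python
KEY_BITS, WORD_BITS, TOTAL_SUBKEYS, ROTATION = 128, 16, 52, 25

def idea_subkeys(key: int) -> list:
    """Expand a 128-bit key (given as an integer) into 52 16-bit subkeys."""
    subkeys = []
    while len(subkeys) < TOTAL_SUBKEYS:
        for i in range(8):                       # take eight 16-bit words per group
            if len(subkeys) == TOTAL_SUBKEYS:
                break
            shift = KEY_BITS - WORD_BITS * (i + 1)
            subkeys.append((key >> shift) & 0xFFFF)
        # rotate the whole 128-bit key left by 25 bits between groups
        key = ((key << ROTATION) | (key >> (KEY_BITS - ROTATION))) & ((1 << KEY_BITS) - 1)
    return subkeys

assert len(idea_subkeys(0x0123456789ABCDEF0123456789ABCDEF)) == 52
```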
Decryption
Decryption works like encryption, but the order of the round keys is inverted, and the subkeys for the odd rounds are replaced by their inverses. For instance, the values of subkeys K1–K4 are replaced by the inverses of K49–K52 for the respective group operation, while K5 and K6 of each group are replaced by K47 and K48 for decryption.
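Computing those inverses means taking the additive inverse modulo 2^16 for subkeys used in additions and the multiplicative inverse modulo 2^16 + 1 for subkeys used in multiplications. A minimal, self-contained Python sketch using the same zero-word convention as above:

```python
MOD = 1 << 16

def add_inverse(x: int) -> int:
    return (-x) % MOD                 # additive inverse modulo 2^16

def mul_inverse(x: int) -> int:
    # Multiplicative inverse modulo 2^16 + 1, with 0 standing in for 2^16.
    x = MOD if x == 0 else x
    inv = pow(x, -1, MOD + 1)         # modular inverse (Python 3.8+)
    return 0 if inv == MOD else inv

assert (0x1234 + add_inverse(0x1234)) % MOD == 0
assert (0x1234 * mul_inverse(0x1234)) % (MOD + 1) == 1
```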
Security
The designers analysed IDEA to measure its strength against differential cryptanalysis and concluded that it is immune under certain assumptions. No successful linear or algebraic weaknesses have been reported. The best attack applied to all keys could break IDEA reduced to 6 rounds (the full IDEA cipher uses 8.5 rounds). Note that a "break" is any attack that requires less than 2^128 operations; the 6-round attack requires 2^64 known plaintexts and 2^126.8 operations.
Bruce Schneier thought highly of IDEA in 1996, writing: "In my opinion, it is the best and most secure block algorithm available to the public at this time." (Applied Cryptography, 2nd ed.) However, by 1999 he was no longer recommending IDEA due to the availability of faster algorithms, some progress in its cryptanalysis, and the issue of patents.
In 2011 full 8.5-round IDEA was broken using a meet-in-the-middle attack. Independently in 2012, full 8.5-round IDEA was broken using a narrow-bicliques attack, with a reduction of cryptographic strength of about 2 bits, similar to the effect of the previous bicliques attack on AES; however, this attack does not threaten the security of IDEA in practice.
Weak keys
The very simple key schedule makes IDEA subject to a class of weak keys; some keys containing a large number of 0 bits produce weak encryption. These are of little concern in practice, being sufficiently rare that they are unnecessary to avoid explicitly when generating keys randomly. A simple fix was proposed: XORing each subkey with a 16-bit constant, such as 0x0DAE.
Larger classes of weak keys were found in 2002.
These keys are still of negligible probability and not a concern for a randomly chosen key, and some of the problems are fixed by the constant XOR proposed earlier, but the paper is not certain whether all of them are. A more comprehensive redesign of the IDEA key schedule may be desirable.
Availability
A patent application for IDEA was first filed in Switzerland (CH A 1690/90) on May 18, 1990, then an international patent application was filed under the Patent Cooperation Treaty on May 16, 1991. Patents were eventually granted in Austria, France, Germany, Italy, the Netherlands, Spain, Sweden, Switzerland, the United Kingdom (filed May 16, 1991, issued June 22, 1994, and expired May 16, 2011), the United States (issued May 25, 1993 and expired January 7, 2012) and Japan (JP 3225440) (expired May 16, 2011).
MediaCrypt AG is now offering a successor to IDEA and focuses on its new cipher (officially released in May 2005) IDEA NXT, which was previously called FOX.
Literature
Hüseyin Demirci, Erkan Türe, Ali Aydin Selçuk, A New Meet in the Middle Attack on The IDEA Block Cipher, 10th Annual Workshop on Selected Areas in Cryptography, 2004.
Xuejia Lai and James L. Massey, A Proposal for a New Block Encryption Standard, EUROCRYPT 1990, pp. 389–404
Xuejia Lai and James L. Massey and S. Murphy, Markov ciphers and differential cryptanalysis, Advances in Cryptology — Eurocrypt '91, Springer-Verlag (1992), pp. 17–38.
References
External links
RSA FAQ on Block Ciphers
SCAN entry for IDEA
IDEA in 448 bytes of 80x86
IDEA Applet
Java source code
Block ciphers
Broken block ciphers |
15154 | https://en.wikipedia.org/wiki/IBM%203270 | IBM 3270 | The IBM 3270 is a family of block oriented display and printer computer terminals introduced by IBM in 1971 and normally used to communicate with IBM mainframes. The 3270 was the successor to the IBM 2260 display terminal. Due to the text color on the original models, these terminals are informally known as green screen terminals. Unlike a character-oriented terminal, the 3270 minimizes the number of I/O interrupts required by transferring large blocks of data known as data streams, and uses a high speed proprietary communications interface, using coaxial cable.
IBM no longer manufactures 3270 terminals, but the IBM 3270 protocol is still commonly used via TN3270 clients, 3270 terminal emulation or web interfaces to access mainframe-based applications, which are sometimes referred to as green screen applications.
Principles
The 3270 series was designed to connect with mainframe computers, often at a remote location, using the technology then available in the early 1970s. The main goal of the system was to maximize the number of terminals that could be used on a single mainframe. To do this, the 3270 was designed to minimize the amount of data transmitted, and minimize the frequency of interrupts to the mainframe. By ensuring the CPU is not interrupted at every keystroke, a 1970s-era IBM 3033 mainframe fitted with only 16 MB of main memory was able to support up to 17,500 3270 terminals under CICS.
Most 3270 devices are clustered, with one or more displays or printers connected to a control unit (the 3275 and 3276 included an integrated control unit). Originally devices were connected to the control unit over coaxial cable; later Token Ring, twisted pair, or Ethernet connections were available. A local control unit attaches directly to the channel of a nearby mainframe. A remote control unit is connected to a communications line by a modem. Remote 3270 controllers are frequently multi-dropped, with multiple control units on a line.
IBM 3270 devices are connected to a 3299 multiplexer or to the cluster controller, e.g., 3271, 3272, 3274, 3174, using RG-62, 93 ohm, coax cables in a point-to-point configuration with one dedicated cable per terminal. Data is sent with a bit rate of 2.3587 Mbit/s using a slightly modified differential Manchester encoding. Cable runs of up to are supported, although IBM documents routinely stated the maximum supported coax cable length was . Originally devices were equipped with BNC connectors, which were later replaced with special so-called DPC – Dual Purpose Connectors – supporting the IBM Shielded twisted pair cabling system without the need for so-called red baluns.
In a data stream, both text and control (or formatting functions) are interspersed allowing an entire screen to be painted as a single output operation. The concept of formatting in these devices allows the screen to be divided into fields (clusters of contiguous character cells) for which numerous field attributes, e.g., color, highlighting, character set, protection from modification, can be set. A field attribute occupies a physical location on the screen that also determines the beginning and end of a field. There are also character attributes associated with individual screen locations.
Using a technique known as read modified, a single transmission back to the mainframe can contain the changes from any number of formatted fields that have been modified, but without sending any unmodified fields or static data. This technique enhances the terminal throughput of the CPU, and minimizes the data transmitted. Some users familiar with character interrupt-driven terminal interfaces find this technique unusual. There is also a read buffer capability that transfers the entire content of the 3270-screen buffer including field attributes. This is mainly used for debugging purposes to preserve the application program screen contents while replacing it, temporarily, with debugging information.
Early 3270s offered three types of keyboards. The typewriter keyboard came in both a 66 key version, with no programmed function (PF) keys, and a 78 key version with twelve. Both versions had two Program Attention (PA) keys. The data entry keyboard had five PF keys and two PA keys. The operator console keyboard had twelve PF keys and two PA keys. Later 3270s had an Attention key, a Cursor Select key, a System Request key, twenty-four PF keys and three PA keys. There was also a TEST REQ key. When one of these keys is pressed, it will cause its control unit to generate an I/O interrupt to the host computer and present an Attention ID (AID) identifying which key was pressed. Application program functions such as termination, page-up, page-down, or help can be invoked by a single key press, thereby reducing the load on very busy processors.
A downside to this approach was that vi-like behavior, responding to individual keystrokes, was not possible. For the same reason, a port of Lotus 1-2-3 to mainframes with 3279 screens did not meet with success, because its programmers were unable to properly adapt the spreadsheet's user interface to a screen-at-a-time rather than a character-at-a-time device. But end-user responsiveness was arguably more predictable with 3270, something users appreciated.
Applications
Following its introduction the 3270 and compatibles were by far the most commonly used terminals on IBM System/370 and successor systems. IBM and third-party software that included an interactive component took for granted the presence of 3270 terminals and provided a set of ISPF panels and supporting programs.
Conversational Monitor System (CMS) in VM has support for the 3270 continuing to z/VM.
Time Sharing Option (TSO) in OS/360 and successors has line mode command line support and also has facilities for full screen applications, e.g., ISPF.
Device independent Display Operator Console Support (DIDOCS) in Multiple Console Support (MCS) for OS/360 and successors.
The SPF and Program Development Facility (ISPF/PDF) editors for MVS and VM/SP (ISPF/PDF was available for VM, but little used) and the XEDIT editors for VM/SP through z/VM make extensive use of 3270 features.
Customer Information Control System (CICS) has support for 3270 panels.
Various versions of Wylbur have support for 3270, including support for full-screen applications.
The modified data tag is well suited to converting formatted, structured punched card input onto the 3270 display device. With the appropriate programming, any batch program that uses formatted, structured card input can be layered onto a 3270 terminal.
IBM's OfficeVision office productivity software enjoyed great success with 3270 interaction because its design took the 3270's screen-at-a-time style of interaction into account. And for many years the PROFS calendar was the most commonly displayed screen on office terminals around the world.
A version of the WordPerfect word processor ported to System/370 was designed for the 3270 architecture.
SNA
3270 devices can be part of an SNA – Systems Network Architecture – network or a non-SNA network. If the controllers are SNA-connected, they appear to SNA as PU – Physical Unit – type 2.0 nodes (PU2.1 for APPN), typically with LU – Logical Unit – type 1, 2, and 3 devices connected. Local, channel-attached controllers are controlled by VTAM – Virtual Telecommunications Access Method. Remote controllers are controlled by the NCP – Network Control Program – in the front-end processor, e.g. 3705, 3720, 3725, 3745, and by VTAM.
Third parties
One of the first groups to write and provide operating system support for the 3270 and its early predecessors was the University of Michigan, who created the Michigan Terminal System in order for the hardware to be useful outside of the manufacturer. MTS was the default OS at Michigan for many years, and was still used at Michigan well into the 1990s.
Many manufacturers, such as GTE, Hewlett Packard, Honeywell/Incoterm Div, Memorex, ITT Courier, McData, Harris, Alfaskop and Teletype/AT&T created 3270-compatible terminals, or adapted ASCII terminals such as the HP 2640 series to have a similar block-mode capability that would transmit a screen at a time, with some form validation capability. The industry distinguished between ‘system compatible’ and ‘plug compatible’ controllers: ‘system compatibility’ meant that the third-party system was compatible with the 3270 data stream terminated in the unit, while ‘plug compatible’ equipment was additionally compatible at the coax level, thereby allowing IBM terminals to be connected to a third-party controller or vice versa. Modern applications are sometimes built upon legacy 3270 applications, using software utilities to capture (screen scraping) screens and transfer the data to web pages or GUI interfaces.
In the early 1990s a popular solution to link PCs with the mainframes was the Irma board, an expansion card that plugged into a PC and connected to the controller through a coaxial cable. 3270 simulators for IRMA and similar adapters typically provide file transfers between the PC and the mainframe using the same protocol as the IBM 3270 PC.
Models
The IBM 3270 display terminal subsystem consists of displays, printers and controllers.
Optional features for the 3275 and 3277 are the selector-pen, ASCII rather than EBCDIC character set, an audible alarm, and a keylock for the keyboard. A keyboard numeric lock was available and will lock the keyboard if the operator attempts to enter non-numeric data into a field defined as numeric. Later an Operator Identification Card Reader was added which could read information encoded on a magnetic stripe card.
Displays
Generally, 3277 models allow only upper-case input, except for the mixed EBCDIC/APL or text keyboards, which have lower case. Lower-case capability and dead keys were available as an RPQ (Request Price Quotation); these were added to the later 3278 & 3279 models.
A version of the IBM PC called the 3270 PC, released in October 1983, includes 3270 terminal emulation. Later, the 3270 PC/G (graphics), 3270 PC/GX (extended graphics), 3270 Personal Computer AT, 3270 PC AT/G (graphics) and 3270 PC AT/GX (extended graphics) followed.
CUT vs. DFT
There are two types of 3270 displays with respect to where the 3270 data stream terminates. For CUT (Control Unit Terminal) displays, the stream terminates in the display controller, which instructs the display to move the cursor, position a character, and so on. EBCDIC is translated by the controller into the ‘3270 character set’, and keyboard scan codes from the terminal, read by the controller through a poll, are translated by the controller into EBCDIC. For DFT (Distributed Function Terminal) type displays, most of the 3270 data stream is forwarded to the display by the controller. The display interprets the 3270 protocol itself.
In addition to passing the 3270 data stream directly to the terminal, allowing for features like EAB – Extended Attributes, graphics, etc., DFT also enabled multiple sessions (up to 5 simultaneous), a feature of the 3290 and 3194 multisession displays. This capability was also widely used in second-generation 3270 terminal emulation software.
The MLT - Multiple Logical Terminals feature of the 3174 controller also enabled multiple sessions from a CUT type terminal.
3277
3277 model 1: 40×12 terminal
3277 model 2: 80×24 terminal, the biggest success of all
3277 GA: a 3277 with an RS232C I/O, often used to drive a Tektronix 4013 or 4015 graphic screen (monochrome)
3278
3278 models 1–5: next-generation, with accented characters and dead keys in countries that needed them
model 1: 80x12
model 2: 80×24
model 2A: 80x24 (console) with 4 lines reserved
model 3: 80×32 or 80x24 (switchable)
model 4: 80×43 or 80x24 (switchable)
model 5: 132×27 or 80×24 (switchable)
3278 PS: programmable characters; able to display monochrome graphics
3279
The IBM 3279 was IBM's first color terminal. IBM initially announced four models, and later added a fifth model for use as a processor console.
Models
model 2A: 80-24 base color
model 2B: 80-24 extended color
model 2C: 80-24 base color (console) with 4 lines reserved
model 3A: 80-32 base color
model 3B: 80-32 extended color
Base color
In base color mode the protection and intensity field attributes determine the color:
{| class="wikitable"
|+ Base color mode
|-
! Protection
! Intensity
! Color
|-
| Unprotected
| Normal
| Green
|-
| Unprotected
| Intensified
| Red
|-
| Protected
| Normal
| Blue
|-
| Protected
| Intensified
| White
|}
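For illustration, an emulator might map those two attribute bits to base-mode colors along the lines of the following sketch (a hypothetical helper, not IBM code):

```python
def base_color(protected: bool, intensified: bool) -> str:
    """Map the protection and intensity bits to a base-mode color (see table above)."""
    if not protected:
        return "red" if intensified else "green"
    return "white" if intensified else "blue"

assert base_color(protected=False, intensified=False) == "green"
assert base_color(protected=True, intensified=True) == "white"
```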
Extended color
In extended color mode the color field and character attributes determine the color as one of:
Neutral (White)
Red
Blue
Green
Pink
Yellow
Turquoise
The 3279 was introduced in 1979. The 3279 was widely used as an IBM mainframe terminal before PCs became commonly used for the purpose. It was part of the 3270 series, using the 3270 data stream. Terminals could be connected to a 3274 controller, either channel-connected to an IBM mainframe or linked via an SDLC (Synchronous Data Link Control) link. In the Systems Network Architecture (SNA) protocol these terminals were logical unit type 2 (LU2). The base-color models 2A and 3A used green and red for input fields, and blue and white for output fields. However, the models 2B and 3B supported seven colors, and when equipped with the optional Programmed Symbol Set feature had a loadable character set that could be used to show graphics.
The IBM 3279 with its graphics software support, Graphical Data Display Manager (GDDM), was designed at IBM's Hursley Development Laboratory, near Winchester, England.
3290
The 3290 Information Panel is a 17-inch amber monochrome plasma display unit, announced March 8, 1983, capable of displaying in various modes, including four independent 3278 model 2 terminals, or a single 160×62 terminal; it also supports partitioning. The 3290 supports graphics through the use of programmed symbols. A 3290 application can divide its screen area into as many as 16 separate explicit partitions (logical screens).
The 3290 is a Distributed Function Terminal (DFT) and requires that the controller do a downstream load (DSL) of microcode from floppy or hard disk.
317x
3178: lower cost terminal (1983)
3179: low cost color terminal announced March 20, 1984.
3180
The 3180 was a monochrome display, introduced on March 20, 1984, that the user could configure for several different basic and extended display modes; all of the basic modes have a primary screen size of 24x80. Modes 2 and 2+ have a secondary size of 24x80, 3 and 3+ have a secondary size of 32x80, 4 and 4+ have a secondary size of 43x80 and 5 and 5+ have a secondary size of 27x132. An application can override the primary and alternate screen sizes for the extended mode. The 3180 also supported a single explicit partition that could be reconfigured under application control.
3191
The IBM 3191 Display Station is an economical monochrome CRT. Models A and B are 1920 characters 12-inch CRTs. Models D, E and L are 1920 or 2560 character 14-inch CRTs.
3192
Model C provides a 7-color 14 inch CRT with 80x24 or 80x32 characters
Model D provides a green monochrome 15 inch CRT with 80x24, 80x32, 80x44 or 132x27 characters
Model F provides a 7-color high-resolution 14 inch CRT with 80x24, 80x32, 80x44 or 132x27 characters
Model G provides a 7-color 14 inch CRT with 80x24 or 80x32 characters
Model L provides a green monochrome 15 inch CRT with 80x24, 80x32, 80x44 or 132x27 characters with a selector pen capability
Model W provides a black and white 15 inch CRT with 80x24, 80x32, 80x44 or 132x27 characters
3193
The IBM 3193 Display Station is a high-resolution, portrait-type, monochrome, 380mm (15 inch) CRT image display providing up to letter or A4 size document display capabilities in addition to alphanumeric data.
Compressed images can be sent to the 3193 from a scanner and decompression is performed in the 3193.
Image data compression is a technique to save transmission time and reduce storage requirements.
3194
The IBM 3194 is a Display Station that features a 1.44MB 3.5" floppy drive and IND$FILE transfer.
Model C provides a 12-inch color CRT with 80x24 or 80x32 characters
Model D provides a 15-inch monochrome CRT with 80x24, 80x31, 80x44 or 132x27 characters
Model H provides a 14-inch color CRT with 80x24, 80x31, 80x44 or 132x27 characters
Subsequent
3104: low-cost R-loop connected terminal for the IBM 8100 system
3472 Infowindow
Non-IBM Displays
Several third-party manufacturers produced 3270 displays besides IBM.
GTE
GTE manufactured the IS/7800 Video Display System, nominally compatible with IBM 3277 displays attached to a 3271 or 3272. An incompatibility with the RA buffer order broke the logon screen in VM/SE (SEPP).
Harris
Harris manufactured the 8000 Series Terminal Systems, compatible with IBM 3277 displays attached to a 3271 or 3272.
Harris later manufactured the 9100–9200 Information Processing Systems, which included
9178
9278
9279-2A
9279-3G
9280
Informer 270 376/SNA
The Informer company manufactured a special version of their model 270 terminal that was compatible with IBM 3270 and its associated coax port to connect to a 3x74.
Memorex Telex
Memorex 1377, compatible with IBM 3277; attaches to a 1371 or 1372
Documentation is available for the following:
Memorex/Telex 2078
Memorex/Telex 2079
Memorex/Telex 2080
Memorex/Telex 2178
Memorex/Telex 2179
Nokia/Alfaskop
Alfaskop Display Unit 4110
Alfaskop Display Unit 4112
AT&T
AT&T introduced the Dataspeed 40 terminal/controller, compatible with the IBM 3275, in 1980.
Graphics models
IBM had two different implementations for supporting graphics. The first was implemented in the optional Programmed Symbol Sets (PSS) of the 3278, 3279 and 3287, which became a standard feature on the later 3279-S3G, a.k.a. 3279G, and was based on piecing together graphics with on-the-fly custom-defined symbols downloaded to the terminal.
The second, later implementation provided All Points Addressable (APA) graphics, a.k.a. vector graphics, allowing more efficient graphics than the older technique. The first terminal to support APA/vector graphics was the 3179G, which was later replaced first by the 3192G and then by the 3472G.
Both implementations are supported by IBM GDDM - Graphical Data Display Manager first released in 1979, and by SAS with their SAS/GRAPH software.
IBM 3279G
The IBM 3279-S3G, a.k.a. 3279G, terminal, announced in 1979, was IBM's graphics replacement for the 3279-3B with PSS. The terminal supported 7 colors, and the graphics were made up of Programmable Symbol sets loaded to the terminal by the graphical application GDDM – Graphical Data Display Manager – using the Write Structured Field command.
Programmable Symbols are an addition to the normal base character set (Latin characters, numbers, etc.) hardwired into the terminal. The 3279G supports 6 additional sets of symbols, each holding 190 symbols, for a total of 1,140 programmable symbols. Three of the Programmable Symbol sets have three planes each, enabling the symbols downloaded to those sets to be colored in red, blue, and green, thereby supporting a total of 7 colors.
Each ‘character’ cell consists of a 9x12 or a 9x16 dot matrix depending on the screen model. In order to program a cell with a symbol, 18 bytes of data are needed, making the data load quite heavy in some instances compared to classic text screens.
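A quick back-of-the-envelope check of those figures (assuming the 9×16 cell size):

```python
# Rough arithmetic for Programmed Symbol storage on a 9x16-cell model.
dots_per_cell = 9 * 16                  # 144 dots in one character cell
bytes_per_symbol = dots_per_cell // 8   # 18 bytes per symbol, per plane
symbols_per_set = 190
sets = 6

print(bytes_per_symbol)                    # 18
print(symbols_per_set * bytes_per_symbol)  # 3420 bytes to load one set (one plane)
print(sets * symbols_per_set)              # 1140 programmable symbols in total
```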
If one, for example, wishes to draw a hyperbola on the screen, the application must first compute the Programmable Symbols required to make up the hyperbola and load them to the terminal. The next step is for the application to paint the screen by addressing each screen cell position and selecting the appropriate symbol in one of the Programmable Symbol sets.
The 3279G could be ordered with Attribute Select Keyboard enabling the operator to select attributes, colors and Programmable Symbols sets, making that version of the terminal quite distinctive.
IBM 3179G
The IBM 3179G announced June 18, 1985, is an IBM mainframe computer terminal providing 80×24 or 80×32 characters, 16 colors, plus graphics and is the first terminal to support the APA graphics apart from the 3270 PC/G, 3270 PC/GX, PC AT/G and PC AT/GX.
3179-G terminals combine text and graphics as separate layers on the screen. Although the text and graphics appear combined on the screen, the text layer actually sits over the graphics layer. The text layer contains the usual 3270-style cells which display characters (letters, numbers, symbols, or invisible control characters). The graphics layer is an area of 720×384 pixels. All Points Addressable or vector graphics is used to paint each pixel in one of sixteen colors. As well as being separate layers on the screen, the text and graphics layers are sent to the display in separate data streams, making them completely independent.
The application i.e. GDDM sends the vector definitions to the 3179-G, and the work of activating the pixels that represent the picture (the vector-to-raster conversion) is done in the terminal itself. The datastream is related to the number of graphics primitives (lines, arcs, and so on) in the picture. Arcs are split into short vectors, that are sent to the 3179-G to be drawn. The 3179-G does not store graphic data, and so cannot offload any manipulation function from GDDM. In particular, with user control, each new viewing operation means that the data has to be regenerated and retransmitted.
The 3179G is a distributed function terminal (DFT) and requires a downstream load (DSL) to load its microcode from the cluster controller's floppy disk or hard drive.
The G10 model is a standard 122-key typewriter keyboard, while the G20 model offers APL on the same layout. Compatible with IBM System/370, IBM 4300 series, 303x, 308x, IBM 3090, and IBM 9370.
IBM 3192G
The IBM 3192G, announced in 1987, was the successor to the 3179G. It featured 16 colors and support for locally attached printers (e.g., the IBM Proprinter) for hardcopy with graphics support, or a system printer (text only), implemented as an additional LU.
IBM 3472G
The IBM 3472G, announced in 1989, was the successor to the 3192G and featured five concurrent sessions, one of which could be graphics. Unlike the 3192-G, it needed no expansion unit to attach a mouse or color plotter, and it could also attach a tablet device for digitised input and a bar code reader.
APL / APL2
Most IBM terminals, starting with the 3277, could be delivered with an APL keyboard, allowing the operator/programmer to enter APL symbolic instructions directly into the editor. In order to display APL symbols on the terminal, it had to be equipped with an APL character set in addition to the normal 3270-character set. The APL character set is addressed with a preceding Graphic Escape X'08' instruction.
With the advent of the graphic terminal 3179G, the APL character set was expandable to 138 characters, called APL2. The added characters were: Diamond, Quad Null, Iota Underbar, Epsilon Underbar, Left Tack, Right Tack, Equal Underbar, Squished Quad, Quad Slope, and Dieresis Dot. Later APL2 symbols were supported by 3191 Models D, E, L, the CUT version of 3192, and 3472.
Note that IBM's version of APL is also called APL2.
Display-Controller
3275 remote display with controller function (no additional displays up to one printer)
3276 remote display with controller function. IBM 3276, announced in 1981, was a combined remote controller and display terminal, offering support for up to 8 displays, the 3276 itself included. By default, the 3276 had two type A coax ports, one for its own display, and one free for an additional terminal or printer. Up to three additional adapters, each supporting two coax devices, could be installed. The 3276 could connect to a non-SNA or SNA host using BSC or SDLC with line speed of up to 9,600 bit/s. The 3276 looked very much like the 3278 terminal, and the terminal feature of the 3276 itself, was more or less identical to those of the 3278.
Printers
3284 matrix printer
3286 matrix printer
3287 printer, including a color model
3288 line printer
3268-1 R-loop connected stand-alone printer for the IBM 8100 system
4224 matrix printer
In 1984 IBM announced IPDS – Intelligent Printer Data Stream – for online printing of AFP – Advanced Function Presentation – documents, using bidirectional communication between the application and the printer. IPDS supports, among other things, printing of text, fonts, images, graphics, and barcodes. The IBM 4224 is one of the IPDS-capable dot matrix printers.
With the emergence of printers targeted at the PC market, including laser printers from HP, Canon, and others, 3270 customers got an alternative to IBM 3270 printers by connecting this type of printer through printer protocol converters from manufacturers like I-data, MPI Tech, Adacom, and others. The printer protocol converters basically emulate a 3287-type printer, and were later extended to support IPDS.
The IBM 3482 terminal, announced in 1992, offered a printer port, which could be used for host addressable printing as well as local screen copy.
In the later versions of 3174 the Asynchronous Emulation Adapter (AEA), supporting async RS-232 character-based type terminals, was enhanced to support printers equipped with a serial interface.
Controllers
3271 remote controller
3272 local controller
3274 cluster controller (different models could be channel-attached or remote via BSC or SDLC communication lines, and had between eight and 32 co-ax ports)
3174 cluster controller
On the 3274 and 3174, IBM used the term configuration support letter, sometimes followed by a release number, to designate a list of features together with the hardware and microcode needed to support them.
By 1994 the 3174 Establishment Controller supported features such as attachment to multiple hosts via Token Ring, Ethernet, or X.25 in addition to the standard channel attach or SDLC; terminal attachment via twisted pair, Token Ring or Ethernet in addition to co-ax; and TN3270. They also support attachment of asynchronous ASCII terminals, printers, and plotters alongside 3270 devices.
3274 controller
IBM introduced the 3274 controller family in 1977, replacing the 3271–2 product line.
Whereas the features of the 3271–2 were hardcoded, the 3274 was controlled by microcode read from the 3274's built-in 8-inch floppy drive.
3274 models included 8, 12, 16, and 32 port remote controllers and 32-port local channel attached units. In total 16 different models were over time released to the market. The 3274-1A was an SNA physical Unit type 2.0 (PU2.0), required only a single address on the channel for all 32 devices and was not compatible with the 3272. The 3274-1B and 3274-1D were compatible with the 3272 and were referred to as local non-SNA models.
The 3274 controllers introduced a new generation of the coax protocol, named Category A, to differentiate them from the Category B coax devices, such as the 3277 terminal and the 3284 printer. The first Category A coax devices were the 3278 and the first color terminal, the IBM 3279 Color Display Station.
To enable backward compatibility, it was possible to install coax boards, so-called ‘panels’, in groups of 4 or 8, supporting the now older Category B coax devices. A maximum of 16 Category B terminals could be supported, and only 8 if the controller was fully loaded with a maximum of 4 panels each supporting 8 Category A devices.
During its life span, the 3274 supported several features including:
Extended Data Stream
Extended Highlighting
Programmed Symbol Set (PSS)
V.24 interfaces with speed up to 14.4 kbit/s
V.35 interfaces with speed up to 56 kbit/s
X.25 network attachment
DFT – Distributed Function Terminal
DSL – Downstream load for 3290 and 3179G
9901 and 3299 multiplexer
Entry Assist
Dual Logic (the feature of having two sessions from a CUT mode display).
3174 controller
IBM introduced the 3174 Subsystem Control Unit in 1986, replacing the 3274 product line.
The 3174 was designed to enhance the 3270 product line with many new connectivity options and features. Like the 3274, it was customizable; the main differences were that it used smaller (5.25-inch) diskettes than the 3274's 8-inch diskettes, and that the larger floor models had 10 slots for adapters, some of which were by default occupied by the channel adapter/serial interface, coax adapter, etc. Unlike the 3274, any local model could be configured as either local SNA or local non-SNA, including PU2.1 (APPN).
The models included: 01L, 01R, 02R, 03R, 51R, 52R, 53R, 81R and 82R.
The 01L was local channel attached, the R models were remotely connected, and the x3R models were Token Ring (upstream) connected. The 0xL/R models were floor units supporting up to 32 coax devices through the use of internal or external multiplexers (TMA/3299). The 5xR models were shelf units with 9 coax ports, expandable to 16 by the connection of a 3299 multiplexer. The smallest desktop units, the 8xR, had 4 coax ports, expandable to 8 by the connection of a 3299 multiplexer.
In the 3174 controller line IBM also slightly altered the classic BNC coax connector, changing it to the DPC – Dual Purpose Connector. The DPC female connector was a few millimeters longer and had a built-in switch that detected whether a normal BNC connector or a newer DPC connector was attached, in the latter case changing the physical layer from 93-ohm unbalanced coax to 150-ohm balanced twisted pair, thereby directly supporting the IBM Cabling System without the need for a so-called red balun.
Configuration Support A was the first microcode offered with the 3174. It supported all the hardware modules present at the time, almost all the microcode features found in 3274 and introduced a number of new features including: Intelligent Printer Data Stream (IPDS), Multiple Logical Terminals, Country Extended Code Page (CECP), Response Time Monitor, and Token Ring configured as host interface.
Configuration Support S, strangely following release A, introduced the ability for a local or remote controller to act as a 3270 Token-Ring DSPU gateway, supporting up to 80 downstream PUs.
In 1989, IBM introduced a new range of 3174 models and changed the name from 3174 Subsystem Control Unit to 3174 Establishment Controller. The main new feature was support for an additional 32 coax port in floor models.
The models included: 11L, 11R, 12R, 13R, 61R, 62R, 63R, 91R, and 92R.
The new line of controllers came with Configuration Support B release 1, increased the number of supported DSPU on the Token-Ring gateway to 250 units, and introduced at the same time ‘Group Polling’ that offloaded the mainframe/VTAM polling requirement on the channel.
Configuration Support B release 2 to 5, enabled features like: Local Format Storage (CICS Screen Buffer), Type Ahead, Null/Space Processing, ESCON channel support.
In 1990–1991, a total of 7 more models were added: 21R, 21L, 12L, 22L, 22R, 23R, and 90R. The 12L offered ESCON fibre-optic channel attachment. The models with 2xx designation were equal to the 1xx models but repackaged for rackmount and offered only 4 adapter slots. The 90R was not intended as a coax controller; it was positioned as a Token Ring 3270 DSPU gateway. However, it did have one coax port for configuring the unit, which with a 3299 multiplexer could be expanded to 8.
The line of controllers came with Configuration Support C to support ISDN, APPN and Peer Communication. The ISDN feature allowed downstream devices, typically PC's, to connect to the 3174 via the ISDN network. The APPN support enabled the 3174 to be a part of an APPN network, and the Peer Communication allowed coax attached PC's with ‘Peer Communication Support’ to access resources on the Token-Ring network attached to the 3174.
The subsequent releases 2 to 6 of Configuration Support C enabled support for: split screen, copy from session to session, a calculator function, access to AS/400 hosts with 5250 keyboard emulation, and numerous APPN enhancements. They also added TCP/IP Telnet support, which allowed 3270 CUT terminals to communicate with TCP/IP servers using Telnet while, in another screen, communicating with the mainframe using native 3270; TN3270 support, where the 3174 could connect to a TN3270 host/gateway, eliminating SNA but preserving the 3270 data stream; and IP forwarding, allowing LAN (Token-Ring or Ethernet) devices attached downstream of the 3174 to route IP traffic onto the Frame Relay WAN interface.
In 1993, three new models were added with the announcement of Ethernet Adapter (FC 3045). The models were: 14R, 24R, and 64R.
This was also IBM's final hardware announcement of 3174.
The floor models, and the rack-mountable units, could be expanded with a range of special 3174 adapters, that by 1993 included: Channel adapter, ESCON adapter, Serial (V.24/V.35) adapter, Concurrent Communication Adapter, Coax adapter, Fiber optic “coax” adapter, Async adapter, ISDN adapter, Token-Ring adapter, Ethernet adapter, and line encryption adapter.
In 1994, IBM incorporated the functions of RPQ 8Q0935 into Configuration Support-C release 3, including the TN3270 client.
Non-IBM Controllers
GTE
The GTE IS/7800 Video Display Systems used one of two nominally IBM compatible controllers:
7801 (remote, 3271 equivalent)
7802 (local, 3277 equivalent)
Harris
The Harris 8000 Series Terminal Systems used one of four controllers:
8171 (remote, 3271 equivalent)
8172 (local, 3277 equivalent)
8181 (remote, 3271 equivalent)
8182 (local, 3277 equivalent)
9116
9210
9220
Home grown
An alternative implementation of an establishment controller exists in the form of OEC (Open Establishment Controller). It is a combination of an Arduino shield with a BNC connector and a Python program that runs on a POSIX system. OEC allows a 3270 display to be connected to IBM mainframes via TN3270 or to other systems via VT100. Currently only CUT displays, not DFT displays, are supported.
Memorex
Memorex had two controllers for its 3277-compatible 1377; the 1371 for remote connection and the 1372 for local connection.
Later Memorex offered a series of controllers compatible with the IBM 3274 and 3174
2074
2076
2174
2274
Multiplexers
IBM offered a device called 3299 that acted as a multiplexer between an accordingly configured 3274 controller, with the 9901 multiplexer feature, and up to 8 displays/printers, thereby reducing the number of coax cables between the 3x74 controller and the displays/printers.
With the introduction of the 3174 controller internal or external multiplexers (3299) became mainstream as the 3174-1L controller was equipped with 4 multiplexed ports each supporting 8 devices. The internal 3174 multiplexer card was named TMA – Terminal Multiplexer adapter 9176.
A number of vendors manufactured 3270 multiplexers before and alongside IBM including Fibronics and Adacom offering multiplexers that supported TTP – Telephone Twisted Pair as an alternative to coax, and fiber-optic links between the multiplexers.
In some instances, the multiplexer worked as an “expansion” unit on smaller remote controllers including the 3174-81R / 91R, where the 3299 expanded the number of coax ports from 4 to 8, or the 3174-51R / 61R, where the 3299 expanded the number of coax ports from 8 to 16.
Manufacture
The IBM 3270 display terminal subsystem was designed and developed by IBM's Kingston, New York, laboratory (which later closed during IBM's difficult time in the mid-1990s). The printers were developed by the Endicott, New York, laboratory. As the subsystem expanded, the 3276 display-controller was developed by the Fujisawa laboratory, Japan, and later the Yamato laboratory; and the 3279 color display and 3287 color printer by the Hursley, UK, laboratory. The subsystem products were manufactured in Kingston (displays and controllers), Endicott (printers), and Greenock, Scotland, UK, (most products) and shipped to users in the U.S. and worldwide. 3278 terminals continued to be manufactured in Hortolândia, near Campinas, Brazil, as late as the late 1980s, having their internals redesigned by a local engineering team using modern CMOS technology while retaining their external look and feel.
Telnet 3270
Telnet 3270, or tn3270 describes both the process of sending and receiving 3270 data streams using the telnet protocol and the software that emulates a 3270 class terminal that communicates using that process. tn3270 allows a 3270 terminal emulator to communicate over a TCP/IP network instead of an SNA network. Telnet 3270 can be used for either terminal or print connections. Standard telnet clients cannot be used as a substitute for tn3270 clients, as they use fundamentally different techniques for exchanging data.
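The option negotiation that distinguishes a tn3270 client from a plain telnet client can be sketched briefly. The following Python fragment is a minimal, illustrative sketch of classic TN3270 negotiation in the spirit of RFC 1576: it agrees to the host's requests for TERMINAL-TYPE, END-OF-RECORD and BINARY, refuses everything else, and reports a terminal type of IBM-3278-2. The host name in the usage comment is a placeholder; a real emulator would also handle TN3270E (RFC 2355), escaping of 0xFF bytes, error handling, and of course the 3270 data stream itself.

```python
import socket

IAC, DONT, DO, WONT, WILL, SB, SE = 255, 254, 253, 252, 251, 250, 240
OPT_BINARY, OPT_TERMINAL_TYPE, OPT_EOR = 0, 24, 25
TT_IS = 0                              # "IS" code in TERMINAL-TYPE subnegotiation
TERMINAL_TYPE = b"IBM-3278-2"          # a classic 24x80 model

def negotiate(sock):
    """Answer enough telnet option negotiation to reach 3270 data mode.

    Sketch only: agree to TERMINAL-TYPE, EOR and BINARY, refuse everything
    else, and report the terminal type when asked.  Returns once the host
    has enabled EOR and BINARY on its side.
    """
    host_enabled = set()
    buf = b""
    while not {OPT_EOR, OPT_BINARY} <= host_enabled:
        data = sock.recv(256)
        if not data:
            raise ConnectionError("host closed the connection")
        buf += data
        while len(buf) >= 3 and buf[0] == IAC:
            cmd, opt = buf[1], buf[2]
            if cmd == DO:        # host asks us to enable an option
                ok = opt in (OPT_TERMINAL_TYPE, OPT_EOR, OPT_BINARY)
                sock.sendall(bytes([IAC, WILL if ok else WONT, opt]))
                buf = buf[3:]
            elif cmd == WILL:    # host offers to enable an option on its side
                ok = opt in (OPT_EOR, OPT_BINARY)
                sock.sendall(bytes([IAC, DO if ok else DONT, opt]))
                if ok:
                    host_enabled.add(opt)
                buf = buf[3:]
            elif cmd == SB and opt == OPT_TERMINAL_TYPE:
                # IAC SB TERMINAL-TYPE SEND IAC SE -> reply with our terminal type.
                end = buf.find(bytes([IAC, SE]))
                if end < 0:
                    break        # subnegotiation not complete yet; read more
                sock.sendall(bytes([IAC, SB, OPT_TERMINAL_TYPE, TT_IS])
                             + TERMINAL_TYPE + bytes([IAC, SE]))
                buf = buf[end + 2:]
            else:
                buf = buf[3:]    # ignore anything else in this sketch
    # From here on, 3270 records flow, each terminated by IAC EOR (0xFF 0xEF).

# Usage sketch (host name is hypothetical):
# sock = socket.create_connection(("tn3270.example.com", 23))
# negotiate(sock)
```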
Technical Information
3270 character set
The 3270 displays are available with a variety of keyboards and character sets. The following table shows the 3275/3277/3284–3286 character set for US English EBCDIC (optional characters were available for US ASCII, and UK, French, German, and Italian EBCDIC).
On 3275 and 3277 terminals without the text feature, lower case characters display as uppercase. NL, EM, DUP, and FM control characters display and print as 5, 9, *, and ; characters, respectively, except by the printer when WCC or CCC bits 2 and 3 = '00'b, in which case NL and EM serve their control function and do not print.
Data stream
Data sent to the 3270 consist of commands, a Copy Control Character (CCC) or Write Control Character (WCC) if appropriate, a device address for copy, orders, character data and structured fields. Commands instruct the 3270 control unit to perform some action on a specified device, such as a read or write. Orders are sent as part of the data stream to control the format of the device buffer. Structured fields are to convey additional control functions and data to or from the terminal.
On a local non-SNA controller, the command is a CCW opcode rather than the first byte of the outbound display stream; on all other controllers, the command is the first byte of the display stream, exclusive of protocol headers.
Commands
The following table includes datastream commands and CCW opcodes for local non-SNA controllers; it does not include CCW opcodes for local SNA controllers.
Write control character
The data sent by Write or Erase/Write consists of the command code itself followed by a Write Control Character (WCC) optionally followed by a buffer containing orders or data (or both). The WCC controls the operation of the device. Bits may start printer operation and specify a print format. Other bit settings will sound the audible alarm if installed, unlock the keyboard to allow operator entry, or reset all the Modified Data Tags in the device buffer.
Orders
Orders consist of the order code byte followed by zero to three bytes of variable information.
Attributes
The 3270 has three kinds of attributes:
Field attributes
Extended attributes
Character attributes
Field attributes
The original 3277 and 3275 displays used an 8-bit field attribute byte of which five bits were used.
Bits 0 and 1 are set so that the attribute will always be a valid EBCDIC (or ASCII) character.
Bit 2 is zero to indicate that the associated field is unprotected (operator could enter data) or one for protected.
Bit 3 is zero to indicate that this field, if unprotected, could accept alphanumeric input. One indicates that only numeric input is accepted, and automatically shifts to numeric for some keyboards.
Bit 4 and 5 operate in tandem:
'00'B indicate that the field is displayed on the screen and is not selector-pen detectable.
'01'B indicates that the field is displayable and selector-pen detectable.
'10'B indicates that the field is intensified (bright), displayable, and selector-pen detectable.
'11'B indicates that the field is non-display, non-printable, and not pen detectable. This last can be used in conjunction with the modified data tag to imbed static data on the screen that will be read each time data was read from the device.
Bit 7 is the "Modified Data Tag", where '0' indicates that the associated field has not been modified by the operator and '1' indicates that it has been modified. As noted above, this bit can be set programmatically to cause the field to be treated as modified.
Later models include base color: "Base color (four colors) can be produced on color displays and color printers from current 3270 application programs by use of combinations of the field intensify and field protection attribute bits. For more information on color, refer to IBM 3270 Information System: Color and Programmed Symbols, GA33-3056."
Extended attributes
The 3278 and 3279 and later models used extended attributes to add support for seven colors, blinking, reverse video, underscoring, field outlining, field validation, and programmed symbols.
Character attributes
The 3278 and 3279 and later models allowed attributes on individual characters in a field to override the corresponding field attributes.
Buffer addressing
3270 displays and printers have a buffer containing one byte for every screen position. For example, a 3277 model 2 featured a screen size of 24 rows of 80 columns for a buffer size of 1920 bytes. Bytes are addressed from zero to the screen size minus one, in this example 1919. "There is a fixed relationship between each ... buffer storage location and its position on the display screen." Most orders start operation at the "current" buffer address, and executing an order or writing data will update this address. The buffer address can be set directly using the Set Buffer Address (SBA) order, often followed by Start Field or Start Field Extended. For a device with a 1920 character display a twelve bit address is sufficient. Later 3270s with larger screen sizes use fourteen or sixteen bits.
Addresses are encoded within orders in two bytes. For twelve-bit addresses the high-order two bits of each byte are set to form valid EBCDIC (or ASCII) characters. For example, address 0 is coded as X'4040', or space-space, and address 1919 is coded as X'5D7F', or ')"'. Programmers hand-coding panels usually keep the table of addresses from the 3270 Component Description or the 3270 Reference Card handy. For fourteen- and sixteen-bit addresses, the address uses contiguous bits in two bytes.
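The twelve-bit encoding can be expressed compactly. The sketch below uses the 64-entry translation table from the 3270 reference material, reconstructed here, in which each six-bit half of the address maps to a byte that is a valid EBCDIC graphic; the assertions check it against the addresses quoted in this section and in the example that follows.

```python
# 3270 twelve-bit buffer address encoding: each 6-bit half of the address
# is looked up in a 64-entry table of "safe" EBCDIC graphics.
ADDRESS_TABLE = (
    [0x40] + list(range(0xC1, 0xCA)) + list(range(0x4A, 0x51)) +
    list(range(0xD1, 0xDA)) + list(range(0x5A, 0x62)) +
    list(range(0xE2, 0xEA)) + list(range(0x6A, 0x70)) +
    list(range(0xF0, 0xFA)) + list(range(0x7A, 0x80))
)
assert len(ADDRESS_TABLE) == 64

def encode_address(pos):
    """Encode a buffer position (0..4095) as a two-byte twelve-bit address."""
    return bytes([ADDRESS_TABLE[(pos >> 6) & 0x3F], ADDRESS_TABLE[pos & 0x3F]])

def row_col(row, col, width=80):
    """Convert a 1-based row and column to a linear buffer position."""
    return (row - 1) * width + (col - 1)

assert encode_address(0) == b"\x40\x40"                # row 1, column 1
assert encode_address(row_col(24, 80)) == b"\x5D\x7F"  # position 1919
assert encode_address(row_col(24, 1)) == b"\x5C\xF0"   # the first SBA in the example below
```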
Example
The following data stream writes an attribute in row 24, column 1, writes the (protected) characters '> ' in row 24, columns 2 and 3, and creates an unprotected field on row 24 from columns 5-79. Because the buffer wraps around an attribute is placed on row 24, column 80 to terminate the input field. This data stream would normally be written using an Erase/Write command which would set undefined positions on the screen to '00'x. Values are given in hexadecimal.
Data Description
D3 WCC [reset device + restore (unlock) keyboard + reset MDT]
11 5C F0 SBA Row 24 Column 1
1D F0 SF/Attribute
[protected, alphanumeric, display normal intensity, not pen-detectable, MDT off]
6E 40 '> '
1D 40 SF/Attribute
[unprotected, alphanumeric, display normal intensity, not pen-detectable, MDT off]
SBA is not required here since this is being written at the current buffer position
13 IC - cursor displays at current position: Row 24, column 5
11 5D 7F SBA Row 24 Column 80
1D F0 SF/Attribute
[protected, alphanumeric, display normal intensity, not pen-detectable, MDT off]
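For illustration, the same outbound buffer can be assembled programmatically. This sketch reuses the hypothetical encode_address() and row_col() helpers from the buffer-addressing sketch above; the command byte (e.g., Erase/Write) and any transport wrapping are deliberately left out, as in the listing above.

```python
SBA, SF, IC = 0x11, 0x1D, 0x13      # Set Buffer Address, Start Field, Insert Cursor
WCC_RESET_RESTORE_MDT = 0xD3        # reset device, restore keyboard, reset MDT
ATTR_PROTECTED, ATTR_UNPROTECTED = 0xF0, 0x40   # attribute bytes from the listing
EBCDIC_GT, EBCDIC_SPACE = 0x6E, 0x40            # '>' and ' ' in EBCDIC

stream = bytes([
    WCC_RESET_RESTORE_MDT,
    SBA, *encode_address(row_col(24, 1)),   # row 24, column 1
    SF, ATTR_PROTECTED,
    EBCDIC_GT, EBCDIC_SPACE,                # the characters '> '
    SF, ATTR_UNPROTECTED,                   # unprotected field begins at column 5
    IC,                                     # leave the cursor at the current position
    SBA, *encode_address(row_col(24, 80)),  # row 24, column 80
    SF, ATTR_PROTECTED,                     # terminates the input field
])

# Matches the hex listing above, byte for byte.
assert stream.hex().upper() == "D3115CF01DF06E401D4013115D7F1DF0"
```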
Extended Data Stream
Most 3270 terminals newer than the 3275, 3277, 3284 and 3286 support an extended data stream (EDS) that allows many new capabilities, including:
Display buffers larger than 4096 characters
Additional field attributes, e.g., color
Character attributes within a field
Redefining display geometry
Querying terminal characteristics
Programmed Symbol Sets
All Points Addressable (APA) graphics
See also
3270 emulator
List of IBM products
IBM 5250 display terminal subsystem for IBM AS/400 and IBM System/3X family
Notes
References
3174Intro
3270ColorPSS
3270Intro
3270DS
3270CS
3274Desc
RFC1041
RFC1576
RFC2355
RFC6270
External links
Partial IBM history noting the unveiling of the 3270 display system in 1971
3270 Information Display System - 3270 Data Stream Programmer's Reference from IBM
Introduction to Telnet 3270 from Cisco
- Telnet 3270 Regime Option
- TN3270 Current Practices
- TN3270 Enhancements
3270 Data Stream Programming
rbanffy/3270font: A TTF remake of the font from the 3270
3270
3270
Block-oriented terminal
3270
Multimodal interaction
History of human–computer interaction |
16476 | https://en.wikipedia.org/wiki/John%20Ashcroft | John Ashcroft | John David Ashcroft (born May 9, 1942) is an American lawyer, lobbyist, songwriter and former politician who served as the 79th U.S. Attorney General in the George W. Bush Administration, Senator from Missouri, and Governor of Missouri. He later founded The Ashcroft Group, a Washington D.C. lobbying firm.
Ashcroft previously served as Attorney General of Missouri (1976–1985), and as the 50th Governor of Missouri (1985–1993), having been elected to two consecutive terms in succession (a historical first for a Republican candidate in the state), and he also served as a U.S. Senator from Missouri (1995–2001). He lost his bid for reelection in 2000 to Mel Carnahan, who died three weeks before the election, making Ashcroft the only Senator in U.S. history to lose to a dead person. He had early appointments in Missouri state government and was mentored by John Danforth. He has written several books about politics and ethics. Since 2011 he has sat on the board of directors of the private military company Academi (formerly Blackwater); he has been a member of the Federalist Society, and is a professor at the Regent University School of Law, a conservative Christian institution affiliated with televangelist Pat Robertson.
His son, Jay Ashcroft, is also a politician, serving as Secretary of State of Missouri since January 2017.
Early life and education
Ashcroft was born in Chicago, Illinois, the son of Grace P. (née Larsen) and James Robert Ashcroft. The family later lived in Willard, Missouri, where his father was a minister in an Assemblies of God congregation in nearby Springfield, served as president of Evangel University (1958–74), and jointly as President of Central Bible College (1958–63). His mother was a homemaker, whose parents had emigrated from Norway. His paternal grandfather was an Irish immigrant.
Ashcroft graduated from Hillcrest High School in 1960. He attended Yale University, where he was a member of the St. Elmo Society, graduating in 1964. He received a Juris Doctor from the University of Chicago Law School (1967).
After law school, Ashcroft briefly taught Business Law and worked as an administrator at Southwest Missouri State University. During the Vietnam War, he was not drafted because he received six student draft deferments and one occupational deferment because of his teaching work.
Political career
Missouri State Auditor
In 1972, Ashcroft ran for a Congressional seat in southwest Missouri in the Republican primary election, narrowly losing to Gene Taylor. After the primary, Missouri Governor Kit Bond appointed Ashcroft to the office of State Auditor, which Bond had vacated when he became governor.
In 1974, Ashcroft was narrowly defeated for election to that post by Jackson County Executive George W. Lehr, who had argued that Ashcroft, not being an accountant, was not qualified to be the State Auditor.
Attorney General of Missouri
Missouri Attorney General John Danforth, who was then in his second term, hired Ashcroft as an Assistant State Attorney General. During his service, Ashcroft shared an office with Clarence Thomas, a future U.S. Supreme Court Justice. (In 2001, Thomas administered Ashcroft's oath of office as U.S. Attorney General.)
In 1976, Danforth was elected to the U.S. Senate, and Ashcroft was elected to replace him as State Attorney General. He was sworn in on December 27, 1976. In 1980, Ashcroft was re-elected with 64.5 percent of the vote, winning 96 of Missouri's 114 counties.
In 1983, Ashcroft wrote the leading amicus curiae brief in the U.S. Supreme Court Case Sony Corp. of America v. Universal City Studios, Inc., supporting the use of video cassette recorders for time shifting of television programs.
Governor of Missouri (1985–1993)
Ashcroft was elected governor in 1984 and re-elected in 1988, becoming the first (and, to date, the only) Republican in Missouri history elected to two consecutive terms.
In 1984, his opponent was the Democratic Lt. Governor Ken Rothman. The campaign was so negative on both sides that a reporter described the contest as "two alley cats [scrapping] over truth in advertising". In his campaign ads, Ashcroft drew a contrast between his rural base and the supporters of his urban-based opponent from St. Louis. Democrats did not close ranks on primary night. The defeated candidate Mel Carnahan endorsed Rothman. In the end, Ashcroft won 57 percent of the vote and carried 106 counties—then the largest Republican gubernatorial victory in Missouri history.
In 1988, Ashcroft won by a larger margin over his Democratic opponent, Betty Cooper Hearnes, the wife of the former governor Warren Hearnes. Ashcroft received 64 percent of the vote in the general election—the largest landslide for governor in Missouri history since the U.S. Civil War.
During his second term, Ashcroft served as chairman of the National Governors Association (1991–92).
U.S. Senator from Missouri
In 1994 Ashcroft was elected to the U.S. Senate from Missouri, again succeeding John Danforth, who retired from the position. Ashcroft won 59.8% of the vote against Democratic Congressman Alan Wheat. As Senator:
He opposed the Clinton Administration's Clipper encryption restrictions, arguing in favor of the individual's right to encrypt messages and export encryption software.
In 1999, as chair of the Senate's subcommittee on patents, he helped extend patents for several drugs, most significantly the allergy medication Claritin, to prevent the marketing of less-expensive generics.
On March 30, 2000, with Senator Russ Feingold, Ashcroft convened the only Senate hearing on racial profiling. He said the practice was unconstitutional and that he supported legislation requiring police to keep statistics on their actions.
In 1998, Ashcroft briefly considered running for U.S. President; but on January 5, 1999, he decided that he would seek re-election to his Senate seat in the 2000 election and not run for president.
In the Republican primary, Ashcroft defeated Marc Perkel. In the general election, Ashcroft faced a challenge from Governor Mel Carnahan.
In the midst of a tight race, Carnahan died in an airplane crash three weeks prior to the election. Ashcroft suspended all campaigning after the plane crash. Because of Missouri state election laws and the short time to election, Carnahan's name remained on the ballot. Lieutenant Governor Roger B. Wilson became governor upon Carnahan's death. Wilson said that should Carnahan be elected, he would appoint his widow, Jean Carnahan, to serve in her husband's place; Mrs. Carnahan stated that, in accordance with her late husband's goal, she would serve in the Senate if voters elected his name. Following these developments, Ashcroft resumed campaigning.
Carnahan won the election 51% to 49%. No one had ever posthumously won election to the Senate, though voters had on at least three occasions chosen deceased candidates for the House of Representatives. Ashcroft remains the only U.S. Senator defeated for re-election by a dead person.
U.S. Attorney General
In December 2000, following his Senatorial defeat, Ashcroft was chosen for the position of U.S. attorney general by president-elect George W. Bush. He was confirmed by the Senate by a vote of 58 to 42, with most Democratic senators voting against him, citing his prior opposition to using forced busing to achieve desegregation, and their opposition to Ashcroft's opposition to abortion. At the time of his appointment he was known to be a member of the Federalist Society.
In May 2001, the FBI revealed that they had misplaced thousands of documents related to the investigation of the Oklahoma City bombing. Ashcroft granted a 30-day stay of execution for Timothy McVeigh, the man sentenced to death for the bombing.
In July 2001, Ashcroft began flying exclusively by private jet. When questioned about this decision, the Justice Department explained that this course of action had been recommended based on a “threat assessment” made by the FBI. Neither the Bureau nor the Justice Department would identify the specific nature of the threat, who made it, or when it was made. The CIA was unaware of any specific threats against Cabinet members. Ashcroft was the only Cabinet appointee who traveled on a private jet, excluding the special cases of the Interior and Energy secretaries, whose responsibilities require chartered jets.
After the September 11, 2001 attacks in the United States, Ashcroft was a key administration supporter of passage of the USA Patriot Act. One of its provisions, Section 215, allows the Federal Bureau of Investigation (FBI) to apply for an order from the Foreign Intelligence Surveillance Court to require production of "any tangible thing" for an investigation. This provision was criticized by citizen and professional groups concerned about violations of privacy. Ashcroft referred to the American Library Association's opposition to Section 215 as "hysteria" in two separate speeches given in September 2003. While Attorney General, Ashcroft consistently denied that the FBI or any other law enforcement agency had used the Patriot Act to obtain library circulation records or those of retail sales. According to the sworn testimony of two FBI agents interviewed by the 9/11 Commission, Ashcroft ignored warnings of an imminent al-Qaida attack.
In January 2002, the partially nude female statue of the Spirit of Justice in the Robert F. Kennedy Department of Justice Building, where Ashcroft held press conferences, was covered with blue curtains. Department officials long insisted that the curtains were put up to improve the room's use as a television backdrop and that Ashcroft had nothing to do with it. Ashcroft's successor, Alberto Gonzales, removed the curtains in June 2005. Ashcroft also held daily prayer meetings.
In March 2004, the Justice Department under Ashcroft ruled President Bush's domestic intelligence program illegal. Shortly afterward, he was hospitalized with acute gallstone pancreatitis. White House Counsel Alberto Gonzales and Chief of Staff Andrew Card Jr. went to Ashcroft's bedside in the hospital intensive-care unit, to persuade the incapacitated Attorney General to sign a document to reauthorize the program. Acting Attorney General James Comey alerted FBI Director Robert Mueller III of this plan, and rushed to the hospital, arriving ahead of Gonzales and Card, Jr. Ashcroft, "summoning the strength to lift his head and speak", refused to sign. Attempts to reauthorize the program were ended by President Bush when Ashcroft, Comey and Mueller threatened to resign.
Following accounts of the Abu Ghraib torture and prisoner abuse scandal in Iraq, one of the Torture memos was leaked to the press in June 2004. Jack Goldsmith, then head of the Office of Legal Counsel, had already withdrawn the Yoo memos and advised agencies not to rely on them. After Goldsmith was forced to resign because of his objections, Attorney General Ashcroft issued a one paragraph opinion re-authorizing the use of torture.
Ashcroft pushed his U.S. attorneys to pursue voter fraud cases. However, the U.S. attorneys struggled to find any deliberate voter fraud schemes, only finding individuals who made mistakes on forms or misunderstood whether they were eligible to vote.
Following George W. Bush's re-election, Ashcroft resigned, which took effect on February 3, 2005, after the Senate confirmed White House Counsel Alberto Gonzales as the next attorney general. Ashcroft said in his hand-written resignation letter, dated November 2, "The objective of securing the safety of Americans from crime and terror has been achieved."
Consultant and lobbyist
In May 2005, Ashcroft laid the groundwork for a strategic consulting firm, The Ashcroft Group, LLC. He started operation in the fall of 2005 and as of March 2006 had twenty-one clients, turning down two for every one accepted. In 2005 year-end filings, Ashcroft's firm reported collecting $269,000, including $220,000 from Oracle Corporation, which won Department of Justice approval of a multibillion-dollar acquisition less than a month after hiring Ashcroft. The year-end filing represented, in some cases, only initial payments.
According to government filings, Oracle is one of five Ashcroft Group clients that seek help in selling data or software with security applications. Another client, Israel Aircraft Industries International, is competing with Seattle's Boeing Company to sell the government of South Korea a billion-dollar airborne radar system.
In March 2006, Ashcroft positioned himself as an "anti-Abramoff". In an hour-long interview, Ashcroft used the word "integrity" scores of times. In May 2006, based on conversations with members of Congress, key aides and lobbyists, The Hill magazine listed Ashcroft as one of the top 50 "hired guns" (lobbyists) that K Street had to offer.
By August 2006, Ashcroft's firm reportedly had 30 clients, many of which made products or technology aimed at homeland security. About a third of its client list were not disclosed on grounds of confidentiality. The firm also had equity stakes in eight client companies. It reportedly received $1.4 million in lobbying fees in the past six months, a small fraction of its total earnings.
After the proposed merger of Sirius Satellite Radio Inc. and XM Satellite Radio Holdings Inc., Ashcroft offered the firm his consulting services, according to a spokesman for XM. The spokesman said XM declined Ashcroft's offer. Ashcroft was subsequently hired by the National Association of Broadcasters, which is strongly opposed to the merger.
In 2011, Ashcroft became an "independent director" on the board of Xe Services (now Academi), the controversial private military company formerly known as Blackwater and notorious for the Nisour Square massacre. The company, which faced scores of charges related to weapons trafficking, unlawful force, and corruption, had named Ted Wright as CEO in May 2011. Wright hired a new governance chief to oversee ethical and legal compliance and established a new board composed of former government officials, including former White House counsel Jack Quinn and Ashcroft. In December 2011, Xe Services rebranded to Academi to convey a more "boring" image.
The firm also has a law firm under its umbrella, called the Ashcroft Law Firm. In December 2014, the law firm was hired by convicted Russian arms trafficker Viktor Bout to overturn his 2011 conviction.
In June 2017, Ashcroft was hired by the government of Qatar to carry out a compliance and regulatory review of Qatar's anti-money laundering and counter-terrorist financing framework, to help challenge accusations of supporting terrorism by its neighbors, following a regional blockade, as well as claims by U.S. President Donald Trump.
In June 2018, it was reported that Ashcroft's firm had been hired back in 2016 by Najib Razak, alongside other top U.S. lawyers, to defend him in the 1MDB scandal. According to the document, the firm was hired to provide legal advice and counsel to Najib regarding "improper actions by third parties to attempt to destabilise the government of Malaysia". It remains unclear whether Najib will retain Ashcroft's services on the issue, given the United States Department of Justice's probe into 1MDB.
Political issues
In July 2002, Ashcroft proposed the creation of Operation TIPS, a domestic program in which workers and government employees would inform law enforcement agencies about suspicious behavior they encounter while performing their duties. The program was widely criticized from the beginning, with critics deriding the program as essentially a Domestic Informant Network along the lines of the East German Stasi or the Soviet KGB, and an encroachment upon the First and Fourth amendments. The United States Postal Service refused to be a party to it. Ashcroft defended the program as a necessary component of the ongoing War on Terrorism, but the proposal was eventually abandoned.
Ashcroft proposed a draft of the Domestic Security Enhancement Act of 2003, legislation to expand the powers of the U.S. government to fight crime and terrorism, while simultaneously eliminating or curtailing judicial review of these powers for incidents related to domestic terrorism. The bill was leaked and posted to the Internet on February 7, 2003.
On May 26, 2004, Ashcroft held a news conference at which he said that intelligence from multiple sources indicated that the terrorist organization, al Qaeda, intended to attack the United States in the coming months. Critics suggested he was trying to distract attention from a drop in the approval ratings of President Bush, who was campaigning for re-election.
Groups supporting individual gun ownership praised Ashcroft's support through the DOJ for the Second Amendment. He said specifically that "the Second Amendment protects an individual's right to keep and bear arms," endorsing the position that the amendment protects an individual right.
In 2009 in Ashcroft v. al-Kidd, the Ninth Circuit Court of Appeals in San Francisco found that Ashcroft could be sued and held personally responsible for the wrongful detention of Abdullah al-Kidd. The American citizen was arrested at Dulles Airport in March 2003 on his way to Saudi Arabia for study. He was held for 15 days in maximum security in three states, and 13 months in supervised release, to be used as a material witness in the trial of Sami Omar Al-Hussayen. (The latter was acquitted of all charges of supporting terrorism). Al-Kidd was never charged and was not called as a witness in the Al-Hussayen case.
The court's panel described the government's assertions under the USA Patriot Act (2001) as "repugnant to the Constitution". In a detailed and at times passionate opinion, Judge Milan Smith likened the treatment of al-Kidd to the repressive practices of the British Crown that sparked the American Revolution. He wrote that the government asserts it can detain American citizens "not because there is evidence that they have committed a crime, but merely because the government wishes to investigate them for possible wrongdoing". He called it "a painful reminder of some of the most ignominious chapters of our national history".
Abdullah Al-Kidd was held in a maximum security prison for 16 days, and in supervised release for 13 months. Al-Kidd was born Lavoni T. Kidd in 1973 in Wichita, Kansas. When he converted to Islam as a student at the University of Idaho, where he was a prominent football player, he changed his name. He asserts that Ashcroft violated his civil liberties as an American citizen, as he was treated like a terrorist and not allowed to consult an attorney. Al-Kidd's lawyers say Ashcroft, as US Attorney General, encouraged authorities after 9/11 to arrest potential suspects as material witnesses when they lacked probable cause to believe the suspects had committed a crime.
The US Supreme Court agreed on October 18, 2010 to hear the case. On May 31, 2011, the US Supreme Court unanimously overturned the lower court's decision, saying that al-Kidd could not personally sue Ashcroft, as he was protected by limited immunity as a government official. A majority of the justices held that al-Kidd could not have won his case on the merits, because Ashcroft did not violate his Fourth Amendment rights.
Ashcroft has been a proponent of the War on Drugs. In a 2001 interview on Larry King Live, Ashcroft stated his intention to increase efforts in this area. In 2003, two nationwide investigations code-named Operation Pipe Dream and Operation Headhunter, which targeted businesses selling drug paraphernalia, mostly for cannabis use, resulted in a series of indictments.
Tommy Chong, a counterculture icon, was one of those charged, for his part in financing and promoting Chong Glass/Nice Dreams, a company started by his son Paris. Of the 55 individuals charged as a result of the operations, only Chong was given a prison sentence after conviction (nine months in a federal prison, plus forfeiting $103,000 and a year of probation). The other 54 individuals were given fines and home detentions. While the DOJ denied that Chong was treated any differently from the other defendants, critics thought the government was trying to make an example of him. Chong's experience as a target of Ashcroft's sting operation is the subject of Josh Gilbert's feature-length documentary a/k/a Tommy Chong, which premiered at the 2005 Toronto International Film Festival. In a pre-sentencing brief, the Department of Justice specifically cited Chong's entertainment career as a consideration against leniency.
When Karl Rove was being questioned in 2005 by the FBI over the leak of a covert CIA agent's identity in the press (the Valerie Plame affair), Ashcroft was allegedly briefed about the investigation. The Democratic U.S. Representative John Conyers described this as a "stunning ethical breach that cries out for an immediate investigation." Conyers, the ranking Democrat on the House Judiciary Committee, asked, in a statement, for a formal investigation of the time between the start of Rove's investigation and John Ashcroft's recusal.
Since his service in government, Ashcroft has continued to oppose proposals for physician-assisted suicide, which some states have passed by referenda. When interviewed about it in 2012, after a case had reached the US Supreme Court following California voters' approval of a law to permit it under regulated conditions, he said,
I certainly believe that people who are in pain should be helped and assisted in every way possible, that the drugs should be used to mitigate their pain but I believe the law of the United States of America which requires that drugs not be used except for legitimate health purposes.
In 2015, Human Rights Watch called for the investigation of Ashcroft "for conspiracy to torture as well as other crimes."
Personal life
Ashcroft is a member of the Assemblies of God church. He is married to Janet E. Ashcroft and has three children with her. His son, Jay, is the Missouri Secretary of State.
Ashcroft had long enjoyed inspirational music and singing. In the 1970s, he recorded a gospel record entitled Truth: Volume One, Edition One, with the Missouri legislator Max Bacon, a Democrat.
With fellow U.S. senators Trent Lott, Larry Craig, and Jim Jeffords, Ashcroft formed a barbershop quartet called The Singing Senators. The men performed at social events with other senators. Ashcroft performed "The Star-Spangled Banner" before the National Hockey League All-Star Game in St. Louis in 1988.
Ashcroft composed a paean titled "Let the Eagle Soar," which he sang at the Gordon-Conwell Theological Seminary in February 2002. Ashcroft has written and sung a number of other songs. He has collected these on compilation tapes, including In the Spirit of Life and Liberty and Gospel (Music) According to John. In 1998, he wrote a book with author Gary Thomas titled Lessons from a Father to His Son.
Ashcroft was given an honorary doctorate before giving the commencement at Toccoa Falls College in 2018.
Books
Co-author with Janet E. Ashcroft, College Law for Business, textbook (10th edition, 1987)
On My Honor: The Beliefs that Shape My Life (1998)
Lessons From a Father to His Son (2002)
Never Again: Securing America and Restoring Justice (2006)
Representation in other media
His song, "Let the Eagle Soar", was satirically featured in Michael Moore's 2004 movie Fahrenheit 9/11 and has been frequently mocked by comedians such as David Letterman, Stephen Colbert and David Cross, to name a few.
The song was performed at Bush's 2005 inauguration by Guy Hovis, a former cast member of The Lawrence Welk Show.
"Let the Eagle Soar" is heard in the background in the 2015 film The Big Short, as an ironic juxtaposition of schmaltzy music and new-age capitalist sensibility when a phone call is placed to pastoral Boulder, Colorado, where anti-authoritarian ex-banking trader Ben Rickert (played by Brad Pitt) lives.
The song "Caped Crusader" off of Jello Biafra and the Melvins' 2004 album Never Breathe What You Can't See lifts several lines from Ashcroft and 9/11 hijacker Mohamed Atta in a satirical attack on religious fundamentalism.
References
External links
BBC News' John Ashcroft profile
CNN video of John Ashcroft singing "Let the Eagle Soar"
Excerpts from an album Ashcroft recorded in the 1970s
Ashcroft's Senate voting record
Transcript of James Comey's testimony before the Senate Judiciary Committee, May 15, 2007
1942 births
Living people
20th-century American politicians
20th-century Protestants
21st-century American politicians
21st-century Protestants
American Pentecostals
American people of Norwegian descent
American Christian writers
American legal writers
American non-fiction writers
Assemblies of God people
Christians from Missouri
George W. Bush administration cabinet members
Governors of Missouri
Lawyers from Chicago
Missouri Attorneys General
Missouri lawyers
Missouri Republicans
Musicians from Chicago
Musicians from Missouri
Politicians from Chicago
Republican Party state governors of the United States
Republican Party United States senators
State Auditors of Missouri
United States Attorneys General
United States senators from Missouri
University of Chicago Law School alumni
Yale University 1960s alumni
Federalist Society members |
16830 | https://en.wikipedia.org/wiki/Keyboard%20technology | Keyboard technology | The technology of computer keyboards includes many elements. Among the more important of these is the switch technology that they use. Computer alphanumeric keyboards typically have 80 to 110 durable switches, generally one for each key. The choice of switch technology affects key response (the positive feedback that a key has been pressed) and pre-travel (the distance needed to push the key to enter a character reliably). Virtual keyboards on touch screens have no physical switches and provide audio and haptic feedback instead. Some newer keyboard models use hybrids of various technologies to achieve greater cost savings or better ergonomics.
The modern keyboard also includes a control processor and indicator lights to provide feedback to the user (and to the central processor) about what state the keyboard is in. Plug and play technology means that its 'out of the box' layout can be notified to the system, making the keyboard immediately ready to use without need for further configuration unless the user so desires.
Types
Membrane keyboard
There are two types of membrane-based keyboards, flat-panel membrane keyboards and full-travel membrane keyboards:
Flat-panel membrane keyboards are most often found on appliances like microwave ovens or photocopiers. A common design consists of three layers. The top layer has the labels printed on its front and conductive stripes printed on the back. Under this it has a spacer layer, which holds the front and back layer apart so that they do not normally make electrical contact. The back layer has conductive stripes printed perpendicularly to those of the front layer. When placed together, the stripes form a grid. When the user pushes down at a particular position, their finger pushes the front layer down through the spacer layer to close a circuit at one of the intersections of the grid. This indicates to the computer or keyboard control processor that a particular button has been pressed.
Generally, flat-panel membrane keyboards do not produce a noticeable physical feedback. Therefore, devices using these issue a beep or flash a light when the key is pressed. They are often used in harsh environments where water- or leak-proofing is desirable. Although used in the early days of the personal computer (on the Sinclair ZX80, ZX81 and Atari 400), they have been supplanted by the more tactile dome and mechanical switch keyboards.
Full-travel membrane-based keyboards are the most common computer keyboards today. They have one-piece plastic keytop/switch plungers which press down on a membrane to actuate a contact in an electrical switch matrix.
Dome-switch keyboard
Dome-switch keyboards are a hybrid of flat-panel membrane and mechanical-switch keyboards. They bring two circuit board traces together under a rubber or silicone keypad using either metal "dome" switches or polyurethane formed domes. The metal dome switches are formed pieces of stainless steel that, when compressed, give the user a crisp, positive tactile feedback. These metal types of dome switches are very common, are usually reliable to over 5 million cycles, and can be plated in either nickel, silver or gold. The rubber dome switches, most commonly referred to as polydomes, are formed polyurethane domes where the inside bubble is coated in graphite. While polydomes are typically cheaper than metal domes, they lack the crisp snap of the metal domes, and usually have a lower life specification. Polydomes are considered very quiet, but purists tend to find them "mushy" because the collapsing dome does not provide as much positive response as metal domes. For either metal or polydomes, when a key is pressed, it collapses the dome, which connects the two circuit traces and completes the connection to enter the character. The pattern on the PC board is often gold-plated.
Both are common switch technologies used in mass-market keyboards today. This type of switch technology is most commonly used in handheld controllers, mobile phones, automotive applications, consumer electronics and medical devices. Dome-switch keyboards are also called direct-switch keyboards.
Scissor-switch keyboard
A special case of the computer keyboard dome-switch is the scissor-switch. The keys are attached to the keyboard via two plastic pieces that interlock in a "scissor"-like fashion, and snap to the keyboard and the key. It still uses rubber domes, but a special plastic 'scissors' mechanism links the keycap to a plunger that depresses the rubber dome with a much shorter travel than the typical rubber dome keyboard. Typically scissor-switch keyboards also employ 3-layer membranes as the electrical component of the switch. They also usually have a shorter total key travel distance (2 mm instead of 3.5–4 mm for standard dome-switch keyswitches). This type of keyswitch is often found on the built-in keyboards on laptops and keyboards marketed as 'low-profile'. These keyboards are generally quiet and the keys require little force to press.
Scissor-switch keyboards are typically slightly more expensive. They are harder to clean (due to the limited movement of the keys and their multiple attachment points) but also less likely to get debris in them as the gaps between the keys are often smaller (as there is no need for extra room to allow for the 'wiggle' in the key, as typically found on a membrane keyboard).
Capacitive keyboard
In this type of keyboard, pressing a key changes the capacitance of a pattern of capacitor pads. The pattern consists of two D-shaped capacitor pads for each switch, printed on a printed circuit board (PCB) and covered by a thin, insulating film of soldermask which acts as a dielectric.
Despite the sophistication of the concept, the mechanism of capacitive switching is physically simple. The movable part ends with a flat foam element about the size of an aspirin tablet, finished with aluminum foil. Opposite the switch is a PCB with the capacitor pads. When the key is pressed, the foil tightly clings to the surface of the PCB, forming a daisy chain of two capacitors between contact pads and itself separated with thin soldermask, and thus "shorting" the contact pads with an easily detectable drop of capacitive reactance between them. Usually this permits a pulse or pulse train to be sensed. Because the switch does not have an actual electrical contact, there is no debouncing necessary. The keys do not need to be fully pressed to be actuated, which enables some people to type faster. The sensor tells enough about the position of the key to allow the user to adjust the actuation point (key sensitivity). This adjustment can be done with the help of the bundled software and individually for each key, if so implemented.
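Because the controller sees a continuous per-key signal rather than a simple open/closed contact, an adjustable actuation point amounts to comparing that signal against a per-key threshold. The sketch below illustrates the idea only; the normalized travel value, the threshold figures and the read_key_travel helper are assumptions for illustration, not any vendor's actual firmware interface.

# Illustrative per-key actuation check for an analog-sensing (e.g. capacitive) switch.
# read_key_travel(key) is a hypothetical helper returning how far the key is
# pressed, normalized to 0.0 (at rest) .. 1.0 (bottomed out).

DEFAULT_ACTUATION = 0.5
actuation_point = {"A": 0.45, "W": 0.30}   # per-key thresholds chosen by the user

def key_is_actuated(key, read_key_travel):
    threshold = actuation_point.get(key, DEFAULT_ACTUATION)
    return read_key_travel(key) >= threshold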
The IBM Model F keyboard used a mechanical-key design consisting of a buckling spring over a capacitive PCB, similar to the later Model M keyboard, which used a membrane in place of the PCB.
The Topre Corporation design for key switches uses a spring below a rubber dome. The dome provides most of the force that keeps the key from being pressed, similar to a membrane keyboard, while the spring helps with the capacitive action.
Mechanical-switch keyboard
Every key on a mechanical-switch keyboard contains a complete switch underneath. Each switch is composed of a housing, a spring, and a stem, and sometimes other parts such as a separate tactile leaf or a clickbar. Switches come in three variants: "linear" with consistent resistance, "tactile" with a non-audible bump, and "clicky" with both a bump and an audible click. Depending on the resistance of the spring, the key requires different amounts of pressure to actuate and to bottom out. The shape of the stem as well as the design of the switch housing varies the actuation distance and travel distance of the switch. The sound can be altered by the material of the plate, case, lubrication, the keycap profile, and even modifying the individual switch. These modifications, or "mods", include applying lubricant to reduce friction inside the switch itself, inserting "switch films" to reduce wobble, swapping out the spring inside to modify the resistance of the switch, and many more. Mechanical keyboards allow for the removal and replacement of keycaps; replacing them is more common with mechanical keyboards than with other types because most switches share a common stem shape.
Alongside the mechanical keyboard switch is the stabilizer, which supports longer keys such as the "spacebar", "enter", "backspace", and "shift" keys. Although stabilizers are not as diverse as switches, they come in different sizes to suit keys of different lengths. Just like the switch itself, the stabilizer can be modified to alter the sound and feel of these keys. Lubrication is the most common modification, used to reduce the rattle of the metal wire that makes up the stabilizer; adding padding inside the stabilizer housing further lessens rattle and improves acoustics.
Mechanical keyboards typically have a longer lifespan than membrane or dome-switch keyboards. Cherry MX switches, for example, have an expected lifespan of 50 million clicks per switch, while switches from Razer have a rated lifetime of 60 million clicks per switch.
A major producer of mechanical switches is Cherry, which has manufactured the MX family of switches since the 1980s. Cherry's color-coding system for categorizing switches has been imitated by other switch manufacturers.
Hot-swappable keyboard
Hot-swappable keyboards are keyboards where switches can be pulled out and replaced rather than requiring the typical solder connection. Hot-swappable keyboards can accept any switch that is in the 'MX' style. Instead of the switches being soldered to the keyboard's PCB, hot-swap sockets are soldered on. They are mostly used by keyboard enthusiasts who build custom keyboards, and have recently begun to be adopted by larger companies on production keyboards. Hot-swap sockets typically cost anywhere from $10–25 USD to fill a complete board and can allow users to try a variety of different switches without the tools or knowledge required to solder electronics.
Buckling-spring keyboard
Many typists prefer buckling spring keyboards. The buckling spring mechanism (described in a now-expired patent) atop the switch is responsible for the tactile and aural response of the keyboard. This mechanism controls a small hammer that strikes a capacitive or membrane switch.
In 1993, two years after spawning Lexmark, IBM transferred its keyboard operations to the daughter company. New Model M keyboards continued to be manufactured for IBM by Lexmark until 1996, when Unicomp was established and purchased the keyboard patents and tooling equipment to continue their production.
IBM continued to make Model M's in their Scotland factory until 1999.
Hall-effect keyboard
Hall effect keyboards use magnets and Hall effect sensors instead of switches with mechanical contacts. When a key is depressed, it moves a magnet that is detected by a solid-state sensor. Because they require no physical contact for actuation, Hall-effect keyboards are extremely reliable and can accept millions of keystrokes before failing. They are used for ultra-high reliability applications such as nuclear power plants, aircraft cockpits, and critical industrial environments. They can easily be made totally waterproof, and can resist large amounts of dust and contaminants. Because a magnet and sensor are required for each key, as well as custom control electronics, they are expensive to manufacture.
Laser projection keyboard
A laser projection device approximately the size of a computer mouse projects the outline of keyboard keys onto a flat surface, such as a table or desk. This type of keyboard is portable enough to be easily used with PDAs and cellphones, and many models have retractable cords and wireless capabilities. However, sudden or accidental disruption of the laser will register unwanted keystrokes. Also, if the laser malfunctions, the whole unit becomes useless, unlike conventional keyboards which can be used even if a variety of parts (such as the keycaps) are removed. This type of keyboard can be frustrating to use since it is susceptible to errors, even in the course of normal typing, and its complete lack of tactile feedback makes it even less user-friendly than the lowest quality membrane keyboards.
Roll-up keyboard
Keyboards made of flexible silicone or polyurethane materials can roll up in a bundle. Tightly folding the keyboard may damage the internal membrane circuits. When they are completely sealed in rubber, they are water resistant. Like membrane keyboards, they are reported to be very hard to get used to, as there is little tactile feedback, and silicone will tend to attract dirt, dust, and hair.
Optical keyboard technology
Also known as photo-optical keyboard, light responsive keyboard, photo-electric keyboard, and optical key actuation detection technology.
Optical keyboard technology was introduced in 1962 by Harley E. Kelchner for use in a typewriter with the purpose of reducing the noise generated by actuating the typewriter keys.
Optical keyboard technology utilizes light-emitting devices and photo sensors to optically detect actuated keys. Most commonly the emitters and sensors are located at the perimeter, mounted on a small PCB. The light is directed from side to side of the keyboard interior, and it can only be blocked by the actuated keys. Most optical keyboards require at least two beams (most commonly a vertical beam and a horizontal beam) to determine the actuated key. Some optical keyboards use a special key structure that blocks the light in a certain pattern, allowing only one beam per row of keys (most commonly a horizontal beam).
The mechanism of the optical keyboard is very simple – a light beam is sent from the emitter to the receiving sensor, and the actuated key blocks, reflects, refracts or otherwise interacts with the beam, resulting in an identified key.
Some earlier optical keyboards were limited in their structure: they required special casing to block external light, no multi-key functionality was supported, and the design was restricted to a thick rectangular case.
The advantages of optical keyboard technology are that it offers a truly waterproof keyboard, resilient to dust and liquids, and that it uses only about 20% of the PCB volume of membrane or dome-switch keyboards, significantly reducing electronic waste.
Additional advantages of optical keyboard technology over other keyboard technologies such as Hall effect, laser, roll-up, and transparent keyboards lie in cost (compared with Hall-effect keyboards) and feel – optical keyboard technology does not require a different key mechanism, so the tactile feel of typing has remained the same for over 60 years.
The specialist DataHand keyboard uses optical technology to sense keypresses with a single light beam and sensor per key. The keys are held in their rest position by magnets; when the magnetic force is overcome to press a key, the optical path is unblocked and the keypress is registered.
Debouncing
When a key is pressed, it oscillates (bounces) against its contacts several times before settling. When released, it oscillates again until it comes to rest. Although it happens on a scale too small to be visible to the naked eye, it can be enough to register multiple keystrokes.
To resolve this, the processor in a keyboard debounces the keystrokes, by averaging the signal over time to produce one "confirmed" keystroke that (usually) corresponds to a single press or release. Early membrane keyboards had limited typing speed because they had to do significant debouncing. This was a noticeable problem on the ZX81.
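A common debouncing approach is to require the raw reading to stay unchanged for several consecutive scan passes before the new state is reported. The sketch below illustrates that idea only; the one-per-scan sampling model, the five-sample threshold and the class name are assumptions for illustration, not any particular controller's firmware.

# Minimal counter-based debounce sketch. Assumptions: the controller samples each
# switch once per scan pass (e.g. every millisecond), and a reading must repeat for
# DEBOUNCE_SAMPLES consecutive passes before it is treated as a real press or release.

DEBOUNCE_SAMPLES = 5

class DebouncedKey:
    def __init__(self):
        self.stable_state = False   # last confirmed (debounced) state
        self.candidate = False      # raw state currently being counted toward
        self.count = 0

    def sample(self, raw):
        """Feed one raw reading per scan; return "press"/"release" on a confirmed edge."""
        if raw == self.stable_state:
            self.count = 0          # nothing is changing; reset the counter
            return None
        if raw != self.candidate:
            self.candidate = raw    # a new candidate state; start counting again
            self.count = 1
            return None
        self.count += 1
        if self.count >= DEBOUNCE_SAMPLES:
            self.stable_state = raw # the candidate held long enough; confirm it
            self.count = 0
            return "press" if raw else "release"
        return None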
Keycaps
Keycaps are used on full-travel keyboards. While modern keycaps are typically surface-printed, they can also be double-shot molded, laser printed, sublimation printed, engraved, or they can be made of transparent material with printed paper inserts.
There are also keycaps which are thin shells that are placed over key bases. These were used on IBM PC keyboards.
Other parts
The modern PC keyboard also includes a control processor and indicator lights to provide feedback to the user about what state the keyboard is in. Depending on the sophistication of the controller's programming, the keyboard may also offer other special features. The processor is usually a single chip 8048 microcontroller variant. The keyboard switch matrix is wired to its inputs and it processes the incoming keystrokes and sends the results down a serial cable (the keyboard cord) to a receiver in the main computer box. It also controls the illumination of the "caps lock", "num lock" and "scroll lock" lights.
A common test for whether the computer has crashed is pressing the "caps lock" key. The keyboard sends the key code to the keyboard driver running in the main computer; if the main computer is operating, it commands the light to turn on. All the other indicator lights work in a similar way. The keyboard driver also tracks the shift, alt and control state of the keyboard.
Keyboard switch matrix
The keyboard switch matrix is often drawn with horizontal wires and vertical wires in a grid which is called a matrix circuit. It has a switch at some or all intersections, much like a multiplexed display. Almost all keyboards have only the switch at each intersection, which causes "ghost keys" and "key jamming" when multiple keys are pressed (rollover). Certain, often more expensive, keyboards have a diode between each intersection, allowing the keyboard microcontroller to accurately sense any number of simultaneous keys being pressed, without generating erroneous ghost keys.
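A minimal scanning sketch is shown below to illustrate how a controller walks the matrix and why, without per-switch diodes, three pressed keys that form three corners of a rectangle make the fourth corner read as pressed ("ghosting"). The matrix dimensions and the read_columns helper are assumptions for illustration, not a real controller interface.

# Illustrative matrix scan. read_columns(row) is a hypothetical helper that drives
# one row line and returns a list of booleans, one per column, indicating which
# column lines read as connected while that row is driven.

ROWS, COLS = 8, 16   # assumed matrix dimensions

def scan_matrix(read_columns):
    """Return the set of (row, column) intersections that currently read closed."""
    pressed = set()
    for r in range(ROWS):
        for c, closed in enumerate(read_columns(r)):
            if closed:
                pressed.add((r, c))
    return pressed

def blocks_for_ghosting(pressed):
    """Without diodes, keys at (r1, c1), (r1, c2) and (r2, c1) make (r2, c2) read as
    pressed even though it is not. Report whether the current reading contains such
    a rectangle, so the controller can refuse ("block") the ambiguous state."""
    for (r1, c1) in pressed:
        for (r2, c2) in pressed:
            if r1 != r2 and c1 != c2 and (r1, c2) in pressed:
                return True   # three (or four) corners present: reading is ambiguous
    return False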
Alternative text-entering methods
Optical character recognition (OCR) is preferable to rekeying for converting existing text that is already written down but not in machine-readable format (for example, a Linotype-composed book from the 1940s). In other words, to convert the text from an image to editable text (that is, a string of character codes), a person could re-type it, or a computer could look at the image and deduce what each character is. OCR technology has already reached an impressive state (for example, Google Book Search) and promises more for the future.
Speech recognition converts speech into machine-readable text (that is, a string of character codes). This technology has also reached an advanced state and is implemented in various software products. For certain uses (e.g., transcription of medical or legal dictation; journalism; writing essays or novels) speech recognition is starting to replace the keyboard. However, the lack of privacy when issuing voice commands and dictation makes this kind of input unsuitable for many environments.
Pointing devices can be used to enter text or characters in contexts where using a physical keyboard would be inappropriate or impossible. These accessories typically present characters on a display, in a layout that provides fast access to the more frequently used characters or character combinations. Popular examples of this kind of input are Graffiti, Dasher and on-screen virtual keyboards.
Other issues
Keystroke logging
Unencrypted Bluetooth keyboards are known to be vulnerable to signal theft for keylogging by other Bluetooth devices in range. Microsoft wireless keyboards 2011 and earlier are documented to have this vulnerability.
Keystroke logging (often called keylogging) is a method of capturing and recording user keystrokes. While it can be used legally to measure employee activity, or by law enforcement agencies to investigate suspicious activities, it is also used by hackers for illegal or malicious acts. Hackers use keyloggers to obtain passwords or encryption keys.
Keystroke logging can be achieved by both hardware and software means. Hardware key loggers are attached to the keyboard cable or installed inside standard keyboards. Software keyloggers work on the target computer's operating system and gain unauthorized access to the hardware, hook into the keyboard with functions provided by the OS, or use remote access software to transmit recorded data out of the target computer to a remote location. Some hackers also use wireless keylogger sniffers to collect packets of data being transferred from a wireless keyboard and its receiver, and then they crack the encryption key being used to secure wireless communications between the two devices.
Anti-spyware applications are able to detect many keyloggers and remove them. Responsible vendors of monitoring software support detection by anti-spyware programs, thus preventing abuse of the software. Enabling a firewall does not stop keyloggers per se, but can possibly prevent transmission of the logged material over the net if properly configured. Network monitors (also known as reverse-firewalls) can be used to alert the user whenever an application attempts to make a network connection. This gives the user the chance to prevent the keylogger from "phoning home" with his or her typed information. Automatic form-filling programs can prevent keylogging entirely by not using the keyboard at all. Most keyloggers can be fooled by alternating between typing the login credentials and typing characters somewhere else in the focus window.
Keyboards are also known to emit electromagnetic signatures that can be detected using special spying equipment to reconstruct the keys pressed on the keyboard. Neal O'Farrell, executive director of the Identity Theft Council, revealed to InformationWeek that "More than 25 years ago, a couple of former spooks showed me how they could capture a user's ATM PIN, from a van parked across the street, simply by capturing and decoding the electromagnetic signals generated by every keystroke," O'Farrell said. "They could even capture keystrokes from computers in nearby offices, but the technology wasn't sophisticated enough to focus in on any specific computer."
Physical injury
The use of any keyboard may cause serious injury (such as carpal tunnel syndrome or other repetitive strain injuries) to the hands, wrists, arms, neck or back. The risks of injuries can be reduced by taking frequent short breaks to get up and walk around a couple of times every hour. Users should also vary tasks throughout the day, to avoid overuse of the hands and wrists. When typing on a keyboard, a person should keep the shoulders relaxed with the elbows at the side, with the keyboard and mouse positioned so that reaching is not necessary. The chair height and keyboard tray should be adjusted so that the wrists are straight, and the wrists should not be rested on sharp table edges. Wrist or palm rests should not be used while typing.
Some adaptive technology ranging from special keyboards, mouse replacements and pen tablet interfaces to speech recognition software can reduce the risk of injury. Pause software reminds the user to pause frequently. Switching to a much more ergonomic mouse, such as a vertical mouse or joystick mouse may provide relief.
By using a touchpad or a stylus pen with a graphic tablet, in place of a mouse, one can lessen the repetitive strain on the arms and hands.
See also
List of mechanical keyboards
Keyboard layout
AZERTY
QWERTY
QWERTZ
Keyboard mapping
References
External links
Computer keyboards |
16947 | https://en.wikipedia.org/wiki/Kerberos%20%28protocol%29 | Kerberos (protocol) | Kerberos () is a computer-network authentication protocol that works on the basis of tickets to allow nodes communicating over a non-secure network to prove their identity to one another in a secure manner. Its designers aimed it primarily at a client–server model, and it provides mutual authentication—both the user and the server verify each other's identity. Kerberos protocol messages are protected against eavesdropping and replay attacks.
Kerberos builds on symmetric-key cryptography and requires a trusted third party, and optionally may use public-key cryptography during certain phases of authentication. Kerberos uses UDP port 88 by default.
The protocol was named after the character Kerberos (or Cerberus) from Greek mythology, the ferocious three-headed guard dog of Hades.
History and development
Massachusetts Institute of Technology (MIT) developed Kerberos to protect network services provided by Project Athena. The protocol is based on the earlier Needham–Schroeder symmetric-key protocol. Several versions of the protocol exist; versions 1–3 occurred only internally at MIT.
Kerberos version 4 was primarily designed by Steve Miller and Clifford Neuman. Published in the late 1980s, version 4 was also targeted at Project Athena.
Neuman and John Kohl published version 5 in 1993 with the intention of overcoming existing limitations and security problems. Version 5 appeared as RFC 1510, which was then made obsolete by RFC 4120 in 2005.
Authorities in the United States classified Kerberos as "Auxiliary Military Equipment" on the US Munitions List and banned its export because it used the Data Encryption Standard (DES) encryption algorithm (with 56-bit keys). A Kerberos 4 implementation developed at the Royal Institute of Technology in Sweden named KTH-KRB (rebranded to Heimdal at version 5) made the system available outside the US before the US changed its cryptography export regulations (around 2000). The Swedish implementation was based on a limited version called eBones. eBones was based on the exported MIT Bones release (stripped of both the encryption functions and the calls to them) based on version Kerberos 4 patch-level 9.
In 2005, the Internet Engineering Task Force (IETF) Kerberos working group updated specifications. Updates included:
Encryption and Checksum Specifications (RFC 3961).
Advanced Encryption Standard (AES) Encryption for Kerberos 5 (RFC 3962).
A new edition of the Kerberos V5 specification "The Kerberos Network Authentication Service (V5)" (RFC 4120). This version obsoletes RFC 1510, clarifies aspects of the protocol and intended use in a more detailed and clearer explanation.
A new edition of the Generic Security Services Application Program Interface (GSS-API) specification "The Kerberos Version 5 Generic Security Service Application Program Interface (GSS-API) Mechanism: Version 2" (RFC 4121).
MIT makes an implementation of Kerberos freely available, under copyright permissions similar to those used for BSD. In 2007, MIT formed the Kerberos Consortium to foster continued development. Founding sponsors include vendors such as Oracle, Apple Inc., Google, Microsoft, Centrify Corporation and TeamF1 Inc., academic institutions such as the Royal Institute of Technology in Sweden, Stanford University and MIT, and vendors such as CyberSafe offering commercially supported versions.
Microsoft Windows
Windows 2000 and later versions use Kerberos as their default authentication method. Some Microsoft additions to the Kerberos suite of protocols are documented in RFC 3244 "Microsoft Windows 2000 Kerberos Change Password and Set Password Protocols". RFC 4757 documents Microsoft's use of the RC4 cipher. While Microsoft uses and extends the Kerberos protocol, it does not use the MIT software.
Kerberos is used as the preferred authentication method: in general, joining a client to a Windows domain means enabling Kerberos as the default protocol for authentications from that client to services in the Windows domain and all domains with trust relationships to that domain.
In contrast, when either client or server or both are not joined to a domain (or not part of the same trusted domain environment), Windows will instead use NTLM for authentication between client and server.
Intranet web applications can enforce Kerberos as an authentication method for domain-joined clients by using APIs provided under SSPI.
Microsoft Windows and Windows Server include setspn, a command-line utility that can be used to read, modify, or delete the Service Principal Names (SPN) for an Active Directory service account.
Unix and other operating systems
Many Unix-like operating systems, including FreeBSD, OpenBSD, Apple's macOS, Red Hat Enterprise Linux, Oracle's Solaris, IBM's AIX, HP-UX and others, include software for Kerberos authentication of users or services. A variety of non-Unix like operating systems such as z/OS, IBM i and OpenVMS also feature Kerberos support. Embedded implementation of the Kerberos V authentication protocol for client agents and network services running on embedded platforms is also available from companies.
Protocol
Description
The client authenticates itself to the Authentication Server (AS) which forwards the username to a key distribution center (KDC). The KDC issues a ticket-granting ticket (TGT), which is time stamped, encrypts it using the ticket-granting service's (TGS) secret key, and returns the encrypted result to the user's workstation. This is done infrequently, typically at user logon; the TGT expires at some point although it may be transparently renewed by the user's session manager while they are logged in.
When the client needs to communicate with a service on another node (a "principal", in Kerberos parlance), the client sends the TGT to the TGS, which usually shares the same host as the KDC. The service must have already been registered with the TGS with a Service Principal Name (SPN). The client uses the SPN to request access to this service. After verifying that the TGT is valid and that the user is permitted to access the requested service, the TGS issues a service ticket and session key to the client. The client then sends the ticket to the service server (SS) along with its service request.
The protocol is described in detail below.
User Client-based Login without Kerberos
A user enters a username and password on the client machine(s). Other credential mechanisms like pkinit (RFC 4556) allow for the use of public keys in place of a password. The client transforms the password into the key of a symmetric cipher. This either uses the built-in key scheduling, or a one-way hash, depending on the cipher-suite used.
The server receives the username and the symmetric key material and compares them with the data in its database. The login succeeds if the derived key matches the key stored for the user.
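As a rough illustration of the password-to-key step, the sketch below derives fixed-length symmetric key material from a password and a salt. Kerberos defines its own per-encryption-type string-to-key functions (for the AES types, a PBKDF2-based one is specified in RFC 3962); the salt convention, iteration count and output length used here are illustrative assumptions rather than the exact parameters of any enctype.

# Illustrative only: derive symmetric key material from a password using PBKDF2
# from Python's standard library. Real Kerberos string-to-key functions are
# defined per encryption type (RFC 3961/3962); the parameters below are assumptions.
import hashlib

def string_to_key(password: str, salt: str, iterations: int = 4096) -> bytes:
    # In Kerberos the salt is conventionally built from the realm and principal name.
    return hashlib.pbkdf2_hmac("sha256",
                               password.encode("utf-8"),
                               salt.encode("utf-8"),
                               iterations,
                               dklen=32)

client_key = string_to_key("hunter2", "EXAMPLE.ORGalice")   # hypothetical principal alice@EXAMPLE.ORG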
Client Authentication
The client sends a cleartext message of the user ID to the AS (Authentication Server) requesting services on behalf of the user. (Note: Neither the secret key nor the password is sent to the AS.)
The AS checks to see whether the client is in its database. If it is, the AS generates the secret key by hashing the password of the user found at the database (e.g., Active Directory in Windows Server) and sends back the following two messages to the client:
Message A: Client/TGS Session Key encrypted using the secret key of the client/user.
Message B: Ticket-Granting-Ticket (TGT, which includes the client ID, client network address, ticket validity period, and the Client/TGS Session Key) encrypted using the secret key of the TGS.
Once the client receives messages A and B, it attempts to decrypt message A with the secret key generated from the password entered by the user. If the user entered password does not match the password in the AS database, the client's secret key will be different and thus unable to decrypt message A. With a valid password and secret key the client decrypts message A to obtain the Client/TGS Session Key. This session key is used for further communications with the TGS. (Note: The client cannot decrypt Message B, as it is encrypted using TGS's secret key.) At this point, the client has enough information to authenticate itself to the TGS.
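A minimal sketch of the client's side of this exchange is given below. It uses Fernet from the third-party 'cryptography' package purely as a stand-in for Kerberos's real encryption types; the key-derivation helper, message framing and variable names are illustrative assumptions, not the actual wire protocol.

# Sketch of the client processing the AS reply (messages A and B). Fernet stands in
# for Kerberos's actual per-enctype encryption; everything else is simplified.
import base64, hashlib
from cryptography.fernet import Fernet, InvalidToken

def password_to_fernet_key(password: str, salt: str) -> bytes:
    raw = hashlib.pbkdf2_hmac("sha256", password.encode(), salt.encode(), 4096, dklen=32)
    return base64.urlsafe_b64encode(raw)            # Fernet expects a base64-encoded 32-byte key

def client_process_as_reply(password, salt, message_a, message_b):
    """message_a: Client/TGS session key encrypted under the client's long-term key.
    message_b: the TGT, encrypted under the TGS key and therefore opaque to the client."""
    client_key = Fernet(password_to_fernet_key(password, salt))
    try:
        tgs_session_key = client_key.decrypt(message_a)
    except InvalidToken:
        raise ValueError("wrong password: message A cannot be decrypted")
    return tgs_session_key, message_b               # keep the TGT to present to the TGS later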
Client Service Authorization
When requesting services, the client sends the following messages to the TGS:
Message C: Composed of the message B (the encrypted TGT using the TGS secret key) and the ID of the requested service.
Message D: Authenticator (which is composed of the client ID and the timestamp), encrypted using the Client/TGS Session Key.
Upon receiving messages C and D, the TGS retrieves message B out of message C. It decrypts message B using the TGS secret key. This gives it the Client/TGS Session Key and the client ID (both are in the TGT). Using this Client/TGS Session Key, the TGS decrypts message D (Authenticator) and compares the client IDs from messages B and D; if they match, the server sends the following two messages to the client:
Message E: Client-to-server ticket (which includes the client ID, client network address, validity period, and Client/Server Session Key) encrypted using the service's secret key.
Message F: Client/Server Session Key encrypted with the Client/TGS Session Key.
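The TGS-side processing just described can be sketched as follows. As above, Fernet and JSON stand in for Kerberos's real encryption types and ASN.1 message encoding, and the field names, ticket lifetime and helper structure are illustrative assumptions.

# Sketch of the TGS checks and reply (messages E and F). Illustrative framing only.
import json, time
from cryptography.fernet import Fernet

def tgs_handle_request(tgs_key, service_key, message_b, message_d):
    tgt = json.loads(Fernet(tgs_key).decrypt(message_b))            # open the TGT with the TGS key
    session = tgt["client_tgs_session_key"]
    auth = json.loads(Fernet(session).decrypt(message_d))           # open the authenticator
    if auth["client_id"] != tgt["client_id"]:
        raise ValueError("authenticator does not match the TGT")
    cs_key = Fernet.generate_key().decode()                         # fresh Client/Server session key
    ticket = {"client_id": tgt["client_id"], "addr": tgt["addr"],
              "valid_until": time.time() + 3600,                    # assumed one-hour validity
              "client_server_session_key": cs_key}
    message_e = Fernet(service_key).encrypt(json.dumps(ticket).encode())
    message_f = Fernet(session).encrypt(json.dumps({"client_server_session_key": cs_key}).encode())
    return message_e, message_f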
Client Service Request
Upon receiving messages E and F from TGS, the client has enough information to authenticate itself to the Service Server (SS). The client connects to the SS and sends the following two messages:
Message E: From the previous step (the Client-to-server ticket, encrypted using service's secret key).
Message G: A new Authenticator, which includes the client ID, timestamp and is encrypted using Client/Server Session Key.
The SS decrypts the ticket (message E) using its own secret key to retrieve the Client/Server Session Key. Using the session key, the SS decrypts the Authenticator and compares the client IDs from messages E and G; if they match, the server sends the following message to the client to confirm its true identity and willingness to serve the client:
Message H: The timestamp found in the client's Authenticator (plus 1 in version 4, but not necessary in version 5), encrypted using the Client/Server Session Key.
The client decrypts the confirmation (message H) using the Client/Server Session Key and checks whether the timestamp is correct. If so, then the client can trust the server and can start issuing service requests to the server.
The server provides the requested services to the client.
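The final exchange can be sketched in the same illustrative style; again Fernet and JSON are stand-ins, and the fields and checks shown are simplifications of what a real implementation validates (addresses, lifetimes, replay caches and so on).

# Sketch of the service server's checks (messages E and G) and the client's check of
# the confirmation (message H). Same illustrative Fernet/JSON framing as above.
import json, time
from cryptography.fernet import Fernet

def ss_handle_request(service_key, message_e, message_g):
    ticket = json.loads(Fernet(service_key).decrypt(message_e))     # open the service ticket
    session = ticket["client_server_session_key"]
    auth = json.loads(Fernet(session).decrypt(message_g))
    if auth["client_id"] != ticket["client_id"]:
        raise ValueError("authenticator does not match the ticket")
    if time.time() > ticket["valid_until"]:
        raise ValueError("ticket has expired")
    # Message H: prove knowledge of the session key by echoing the timestamp back.
    confirmation = {"timestamp": auth["timestamp"]}
    return Fernet(session).encrypt(json.dumps(confirmation).encode())

def client_check_confirmation(session_key, message_h, sent_timestamp):
    reply = json.loads(Fernet(session_key).decrypt(message_h))
    return reply["timestamp"] == sent_timestamp                     # mutual authentication succeeds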
Drawbacks and limitations
Kerberos has strict time requirements, which means that the clocks of the involved hosts must be synchronized within configured limits. The tickets have a time availability period, and if the host clock is not synchronized with the Kerberos server clock, the authentication will fail. The default configuration per MIT requires that clock times be no more than five minutes apart. In practice, Network Time Protocol daemons are usually used to keep the host clocks synchronized. Note that some servers (Microsoft's implementation being one of them) may return a KRB_AP_ERR_SKEW result containing the encrypted server time if both clocks have an offset greater than the configured maximum value. In that case, the client could retry by calculating the time using the provided server time to find the offset. This behavior is documented in RFC 4430.
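The skew check itself is simple, as the sketch below illustrates; the five-minute window matches the MIT default mentioned above, while the function shape and error handling are assumptions rather than any implementation's actual code.

# Illustrative clock-skew check applied to an authenticator's timestamp.
import time

MAX_SKEW_SECONDS = 5 * 60    # MIT default: clocks may differ by at most five minutes

def check_authenticator_timestamp(auth_timestamp, now=None):
    now = time.time() if now is None else now
    if abs(now - auth_timestamp) > MAX_SKEW_SECONDS:
        # A real KDC or service would return KRB_AP_ERR_SKEW here; on receiving it
        # (with the server's time), a client may estimate the offset and retry.
        raise ValueError("clock skew too great")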
The administration protocol is not standardized and differs between server implementations. Password changes are described in RFC 3244.
In case of symmetric cryptography adoption (Kerberos can work using symmetric or asymmetric (public-key) cryptography), since all authentications are controlled by a centralized key distribution center (KDC), compromise of this authentication infrastructure will allow an attacker to impersonate any user.
Each network service that requires a different host name will need its own set of Kerberos keys. This complicates virtual hosting and clusters.
Kerberos requires user accounts and services to have a trusted relationship to the Kerberos token server.
The required client trust makes creating staged environments (e.g., separate domains for test environment, pre-production environment and production environment) difficult: Either domain trust relationships need to be created that prevent a strict separation of environment domains, or additional user clients need to be provided for each environment.
Vulnerabilities
The Data Encryption Standard (DES) cipher can be used in combination with Kerberos, but is no longer an Internet standard because it is weak. Security vulnerabilities exist in many legacy products that implement Kerberos because they have not been updated to use newer ciphers like AES instead of DES.
In November 2014, Microsoft released a patch (MS14-068) to rectify an exploitable vulnerability in Windows implementation of the Kerberos Key Distribution Center (KDC). The vulnerability purportedly allows users to "elevate" (and abuse) their privileges, up to Domain level.
See also
Single sign-on
Identity management
SPNEGO
S/Key
Secure remote password protocol (SRP)
Generic Security Services Application Program Interface (GSS-API)
Host Identity Protocol (HIP)
List of single sign-on implementations
References
General
RFCs
The Kerberos Network Authentication Service (V5) [Obsolete]
The Kerberos Version 5 GSS-API Mechanism
Encryption and Checksum Specifications for Kerberos 5
Advanced Encryption Standard (AES) Encryption for Kerberos 5
The Kerberos Network Authentication Service (V5) [Current]
The Kerberos Version 5 Generic Security Service Application Program Interface (GSS-API) Mechanism: Version 2
Kerberos Cryptosystem Negotiation Extension
Public Key Cryptography for Initial Authentication in Kerberos (PKINIT)
Online Certificate Status Protocol (OCSP) Support for Public Key Cryptography for Initial Authentication in Kerberos (PKINIT)
The RC4-HMAC Kerberos Encryption Types Used by Microsoft Windows [Obsolete]
Extended Kerberos Version 5 Key Distribution Center (KDC) Exchanges over TCP
Elliptic Curve Cryptography (ECC) Support for Public Key Cryptography for Initial Authentication in Kerberos (PKINIT)
Problem Statement on the Cross-Realm Operation of Kerberos
Generic Security Service Application Program Interface (GSS-API): Delegate if Approved by Policy
Additional Kerberos Naming Constraints
Anonymity Support for Kerberos
A Generalized Framework for Kerberos Pre-Authentication
Using Kerberos Version 5 over the Transport Layer Security (TLS) Protocol
The Unencrypted Form of Kerberos 5 KRB-CRED Message
Kerberos Version 5 Generic Security Service Application Program Interface (GSS-API) Channel Binding Hash Agility
One-Time Password (OTP) Pre-Authentication
Deprecate DES, RC4-HMAC-EXP, and Other Weak Cryptographic Algorithms in Kerberos
Kerberos Options for DHCPv6
Camellia Encryption for Kerberos 5
Kerberos Principal Name Canonicalization and Cross-Realm Referrals
An Information Model for Kerberos Version 5
Further reading
External links
Kerberos Consortium
Kerberos page at MIT website
Kerberos Working Group at IETF website
Kerberos Sequence Diagram
Heimdal/Kerberos implementation
Authentication protocols
Computer access control protocols
Computer network security
Key transport protocols
Symmetric-key algorithms
Massachusetts Institute of Technology software |
17530 | https://en.wikipedia.org/wiki/Lattice | Lattice | Lattice may refer to:
Arts and design
Latticework, an ornamental criss-crossed framework, an arrangement of crossing laths or other thin strips of material
Lattice (music), an organized grid model of pitch ratios
Lattice (pastry), an ornamental pattern of crossing strips of pastry
Companies
Lattice Engines, a technology company specializing in business applications for marketing and sales
Lattice Group, a former British gas transmission business
Lattice Semiconductor, a US-based integrated circuit manufacturer
Science, technology, and mathematics
Mathematics
Lattice (group), a repeating arrangement of points
Lattice (discrete subgroup), a discrete subgroup of a topological group whose quotient carries an invariant finite Borel measure
Lattice (module), a module over a ring which is embedded in a vector space over a field
Lattice graph, a graph that can be drawn within a repeating arrangement of points
Lattice-based cryptography, encryption systems based on repeating arrangements of points
Lattice (order), a partially ordered set with unique least upper bounds and greatest lower bounds
Lattice-based access control, computer security systems based on partially ordered access privileges
Skew lattice, a non-commutative generalization of order-theoretic lattices
Lattice multiplication, a multiplication algorithm suitable for hand calculation
Other uses in science and technology
Bethe lattice, a regular infinite tree structure used in statistical mechanics
Crystal lattice or Bravais lattice, a repetitive arrangement of atoms
Lattice C, a compiler for the C programming language
Lattice mast, a type of observation mast common on major warships in the early 20th century
Lattice model (physics), a model defined not on a continuum, but on a grid
Lattice tower or truss tower, a type of freestanding framework tower
Lattice truss bridge, a type of truss bridge that uses many closely spaced diagonal elements
Other uses
Lattice model (finance), a method for evaluating stock options that divides time into discrete intervals
See also
Grid (disambiguation)
Mesh (disambiguation)
Trellis (disambiguation) |
18031 | https://en.wikipedia.org/wiki/Leon%20Battista%20Alberti | Leon Battista Alberti | Leon Battista Alberti (; 14 February 1406 – 25 April 1472) was an Italian Renaissance humanist author, artist, architect, poet, priest, linguist, philosopher, and cryptographer; he epitomised the nature of those identified now as polymaths. He is considered the founder of Western cryptography, a claim he shares with Johannes Trithemius.
Although he often is characterized exclusively as an architect, as James Beck has observed, "to single out one of Leon Battista's 'fields' over others as somehow functionally independent and self-sufficient is of no help at all to any effort to characterize Alberti's extensive explorations in the fine arts". Although Alberti is known mostly as an artist, he was also an accomplished mathematician and made significant advances in the field during the fifteenth century. The two most important buildings he designed are the churches of San Sebastiano (1460) and Sant'Andrea (1472), both in Mantua.
Alberti's life was described in Giorgio Vasari's Lives of the Most Excellent Painters, Sculptors, and Architects.
Biography
Early life
Leon Battista Alberti was born in 1406 in Genoa. His mother was Bianca Fieschi. His father, Benedetto Alberti, was a wealthy Florentine who had been exiled from his own city, but allowed to return in 1428. Alberti was sent to boarding school in Padua, then studied law at Bologna. He lived for a time in Florence, then in 1431 travelled to Rome, where he took holy orders and entered the service of the papal court. During this time he studied the ancient ruins, which excited his interest in architecture and strongly influenced the form of the buildings that he designed.
Alberti was gifted in many ways. He was tall, strong, and a fine athlete who could ride the wildest horse and jump over a person's head. He distinguished himself as a writer while still a child at school, and by the age of twenty had written a play that was successfully passed off as a genuine piece of Classical literature. In 1435 he began his first major written work, Della pittura, which was inspired by the burgeoning pictorial art in Florence in the early fifteenth century. In this work he analysed the nature of painting and explored the elements of perspective, composition, and colour.
In 1438 he began to focus more on architecture and was encouraged by the Marchese Leonello d'Este of Ferrara, for whom he built a small triumphal arch to support an equestrian statue of Leonello's father. In 1447 Alberti became architectural advisor to Pope Nicholas V and was involved in several projects at the Vatican.
First major commission
His first major architectural commission was in 1446 for the facade of the Rucellai Palace in Florence. This was followed in 1450 by a commission from Sigismondo Malatesta to transform the Gothic church of San Francesco in Rimini into a memorial chapel, the Tempio Malatestiano. In Florence, he designed the upper parts of the facade for the Dominican church of Santa Maria Novella, famously bridging the nave and lower aisles with two ornately inlaid scrolls, solving a visual problem and setting a precedent to be followed by architects of churches for four hundred years. In 1452, he completed De re aedificatoria, a treatise on architecture, using as its basis the work of Vitruvius and influenced by the archaeological remains of Rome. The work was not published until 1485. It was followed in 1464 by his less influential work, De statua, in which he examines sculpture. Alberti's only known sculpture is a self-portrait medallion, sometimes attributed to Pisanello.
Alberti was employed to design two churches in Mantua, San Sebastiano, which was never completed and for which Alberti's intention can only be speculated upon, and the Basilica of Sant'Andrea. The design for the latter church was completed in 1471, a year before Alberti's death; the building was brought to completion after his death and is his most significant work.
Alberti as artist
As an artist, Alberti distinguished himself from the ordinary craftsman educated in workshops. He was a humanist who followed Aristotle and Plotinus, and part of the rapidly expanding entourage of intellectuals and artisans supported by the courts of the princes and lords of the time. As a member of a noble family and as part of the Roman curia, Alberti had special status. He was a welcomed guest at the Este court in Ferrara, and in Urbino he spent part of the hot-weather season with the soldier-prince Federico III da Montefeltro. The Duke of Urbino was a shrewd military commander, who generously spent money on the patronage of art. Alberti planned to dedicate his treatise on architecture to his friend.
Among Alberti's smaller studies, pioneering in their field, were a treatise on cryptography, De componendis cifris, and the first Italian grammar. With the Florentine cosmographer Paolo Toscanelli he collaborated in astronomy, a science closely allied to geography at that time, and he produced a small Latin work on geography, Descriptio urbis Romae (The Panorama of the City of Rome). Just a few years before his death, Alberti completed De iciarchia (On Ruling the Household), a dialogue about Florence during the Medici rule.
Having taken holy orders, Alberti never married. He loved animals and had a pet dog, a mongrel, for whom he wrote a panegyric (Canis). Vasari describes Alberti as "an admirable citizen, a man of culture... a friend of talented men, open and courteous with everyone. He always lived honourably and like the gentleman he was." Alberti died in Rome on 25 April 1472 at the age of 66.
Publications
Alberti regarded mathematics as a starting point for the discussion of art and the sciences. "To make clear my exposition in writing this brief commentary on painting," Alberti began his treatise, Della Pittura (On Painting) that he dedicated to Brunelleschi, "I will take first from the mathematicians those things with which my subject is concerned."
Della pittura (also known in Latin as De Pictura) relied for its scientific content on classical optics in determining perspective as a geometric instrument of artistic and architectural representation. Alberti was well versed in the sciences of his age. His knowledge of optics was connected to the long-standing tradition of the Kitab al-manazir (The Optics; De aspectibus) of the Arab polymath Alhazen (Ibn al-Haytham, d. c. 1041), which was mediated by Franciscan optical workshops of the thirteenth-century Perspectivae traditions of scholars such as Roger Bacon, John Peckham, and Witelo (similar influences are also traceable in the third commentary of Lorenzo Ghiberti, Commentario terzo).
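The geometric core of such a perspective construction can be illustrated with a short sketch (in Python, offered purely as an illustration and not drawn from Alberti's own text): a point in space is projected onto a picture plane a fixed distance from the eye, so its apparent size shrinks in proportion to its depth.

```python
# A minimal sketch of central (one-point) perspective projection.
# The picture plane sits at distance d from the eye; d = 1.0 is an
# arbitrary assumption made only for this illustration.
def project(point, d=1.0):
    """Map a 3-D point (x, y, z), with z > 0, onto the picture plane z = d."""
    x, y, z = point
    scale = d / z                  # objects twice as far away appear half as large
    return (x * scale, y * scale)

# A square tile receding into depth: its projected width shrinks with distance.
for z in (2.0, 4.0, 8.0):
    left = project((-0.5, -1.0, z))
    right = project((0.5, -1.0, z))
    print(f"depth {z}: projected width = {right[0] - left[0]:.3f}")
```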
In both Della pittura and De statua, Alberti stressed that "all steps of learning should be sought from nature". The ultimate aim of an artist is to imitate nature. Painters and sculptors strive "though by different skills, at the same goal, namely that as nearly as possible the work they have undertaken shall appear to the observer to be similar to the real objects of nature". However, Alberti did not mean that artists should imitate nature objectively, as it is, but the artist should be especially attentive to beauty, "for in painting beauty is as pleasing as it is necessary". The work of art is, according to Alberti, so constructed that it is impossible to take anything away from it or to add anything to it, without impairing the beauty of the whole. Beauty was for Alberti "the harmony of all parts in relation to one another," and subsequently "this concord is realized in a particular number, proportion, and arrangement demanded by harmony". Alberti's thoughts on harmony were not new—they could be traced back to Pythagoras—but he set them in a fresh context, which fit in well with the contemporary aesthetic discourse.
In Rome, Alberti had plenty of time to study its ancient sites, ruins, and objects. His detailed observations, included in his De re aedificatoria (1452, On the Art of Building), were patterned after the De architectura by the Roman architect and engineer Vitruvius (fl. 46–30 BC). The work was the first architectural treatise of the Renaissance. It covered a wide range of subjects, from history to town planning, and from engineering to the philosophy of beauty. De re aedificatoria, a large and expensive book, was not fully published until 1485, after which it became a major reference for architects. However, the book was written "not only for craftsmen but also for anyone interested in the noble arts", as Alberti put it. Originally published in Latin, the first Italian edition came out in 1546, and the standard Italian edition by Cosimo Bartoli was published in 1550. Pope Nicholas V, to whom Alberti dedicated the whole work, dreamed of rebuilding the city of Rome, but he managed to realize only a fragment of his visionary plans. Through his book, Alberti opened up his theories and ideals of the Florentine Renaissance to architects, scholars, and others.
Alberti wrote I Libri della famiglia—which discussed education, marriage, household management, and money—in the Tuscan dialect. The work was not printed until 1843. Like Erasmus decades later, Alberti stressed the need for a reform in education. He noted that "the care of very young children is women's work, for nurses or the mother", and that at the earliest possible age children should be taught the alphabet. With great hopes, he gave the work to his family to read, but in his autobiography Alberti confesses that "he could hardly avoid feeling rage, moreover, when he saw some of his relatives openly ridiculing both the whole work and the author's futile enterprise along with it". Momus, written between 1443 and 1450, was a notable comedy about the Olympian deities. It has been considered a roman à clef—Jupiter has been identified in some sources as Pope Eugenius IV and Pope Nicholas V. Alberti borrowed many of its characters from Lucian, one of his favorite Greek writers. The name of its hero, Momus, refers to the Greek word for blame or criticism. After being expelled from heaven, Momus, the god of mockery, is eventually castrated. Jupiter and the other deities come down to earth also, but they return to heaven after Jupiter breaks his nose in a great storm.
Architectural works
Alberti did not concern himself with the practicalities of building, and very few of his major works were brought to completion. As a designer and a student of Vitruvius and of ancient Roman remains, he grasped the nature of column and lintel architecture, from the visual rather than structural viewpoint, and correctly employed the Classical orders, unlike his contemporary, Brunelleschi, who used the Classical column and pilaster in a free interpretation. Among Alberti's concerns was the social effect of architecture, and to this end he was very well aware of the cityscape. This is demonstrated by his inclusion, at the Rucellai Palace, of a continuous bench for seating at the level of the basement. Alberti anticipated the principle of street hierarchy, with wide main streets connected to secondary streets, and buildings of equal height.
In Rome he was employed by Pope Nicholas V for the restoration of the Roman aqueduct of Acqua Vergine, which debouched into a simple basin designed by Alberti that was later swept away by the Baroque Trevi Fountain.
In some studies, the authors propose that the Villa Medici in Fiesole might owe its design to Alberti, not to Michelozzo, and that it then became the prototype of the Renaissance villa. This hilltop dwelling, commissioned by Giovanni de' Medici, Cosimo il Vecchio's second son, with its view over the city, may be the very first example of a Renaissance villa: that is to say, it follows the Albertian criteria for rendering a country dwelling a "villa suburbana". From this perspective the Villa Medici in Fiesole could therefore be considered the "muse" for numerous other buildings, not only in the Florence area, which from the end of the fifteenth century onward drew inspiration and creative innovation from it.
Tempio Malatestiano, Rimini
The Tempio Malatestiano in Rimini (1447, 1453–60) is the rebuilding of a Gothic church. The facade, with its dynamic play of forms, was left incomplete.
Façade of Palazzo Rucellai
The design of the façade of the Palazzo Rucellai (1446–51) was one of several commissions for the Rucellai family. The design overlays a grid of shallow pilasters and cornices in the Classical manner onto rusticated masonry, and is surmounted by a heavy cornice. The inner courtyard has Corinthian columns. The palace set a standard in the use of Classical elements that is original in civic buildings in Florence, and greatly influenced later palazzi. The work was executed by Bernardo Rossellino.
Santa Maria Novella
At Santa Maria Novella, Florence, between 1448 and 1470 the upper facade was constructed to the design of Alberti. It was a challenging task, as the lower level already had three doorways and six Gothic niches containing tombs and employing the polychrome marble typical of Florentine churches, such as San Miniato al Monte and the Baptistery of Florence. The design also incorporates an ocular window that was already in place. Alberti introduced Classical features around the portico and spread the polychromy over the entire facade in a manner that includes Classical proportions and elements such as pilasters, cornices, and a pediment in the Classical style, ornamented with a sunburst in tesserae, rather than sculpture. The best known feature of this typically aisled church is the manner in which Alberti has solved the problem of visually bridging the different levels of the central nave and much lower side aisles. He employed two large scrolls, which were to become a standard feature of church facades in the later Renaissance, Baroque, and Classical Revival buildings.
Pienza
Alberti is considered to have been the consultant for the design of the Piazza Pio II, Pienza. The village, previously called Corsignano, was redesigned beginning around 1459. It was the birthplace of Aeneas Silvius Piccolomini, Pope Pius II, in whose employ Alberti served. Pius II wanted to use the village as a retreat, but needed for it to reflect the dignity of his position.
The piazza is a trapezoid shape defined by four buildings, with a focus on Pienza Cathedral and passages on either side opening onto a landscape view. The principal residence, Palazzo Piccolomini, is on the western side. It has three stories, articulated by pilasters and entablature courses, with a twin-lighted cross window set within each bay. This structure is similar to Alberti's Palazzo Rucellai in Florence and other later palaces. Noteworthy is the internal court of the palazzo. The back of the palace, to the south, is defined by loggias on all three floors that overlook an enclosed Italian Renaissance garden, with later Giardino all'italiana modifications, and offer spectacular views into the distant landscape of the Val d'Orcia and Pope Pius's beloved Mount Amiata beyond. Below this garden is a vaulted stable that had stalls for a hundred horses. The design, which radically transformed the center of the town, included a palace for the pope, a church, a town hall, and a building for the bishops who would accompany the Pope on his trips. Pienza is considered an early example of Renaissance urban planning.
Sant' Andrea, Mantua
The Basilica of Sant'Andrea, Mantua was begun in 1471, the year before Alberti's death. It was brought to completion and is his most significant work employing the triumphal arch motif, both for its facade and interior, and influencing many works that were to follow. Alberti perceived the role of architect as designer. Unlike Brunelleschi, he had no interest in the construction, leaving the practicalities to builders and the oversight to others.
Other buildings
San Sebastiano, Mantua (begun 1458), the unfinished facade of which has prompted much speculation as to Alberti's intention
Sepolcro Rucellai in San Pancrazio (1467)
The Tribune for Santissima Annunziata, Florence (1470, completed with alterations, 1477)
Painting
Giorgio Vasari, who argued that historical progress in art reached its peak in Michelangelo, emphasized Alberti's scholarly achievements, not his artistic talents: "He spent his time finding out about the world and studying the proportions of antiquities; but above all, following his natural genius, he concentrated on writing rather than on applied work." Leonardo, who ironically called himself "an uneducated person" (omo senza lettere), followed Alberti in the view that painting is science. However, as a scientist, Leonardo was more empirical than Alberti, who was a theorist and did not have similar interest in practice. Alberti believed in ideal beauty, but Leonardo filled his notebooks with observations on human proportions, page after page, ending with his famous drawing of the Vitruvian man, a human figure related to a square and a circle.
In On Painting, Alberti uses the expression "We Painters", but as a painter, or sculptor, he was a dilettante. "In painting Alberti achieved nothing of any great importance or beauty", wrote Vasari. "The very few paintings of his that are extant are far from perfect, but this is not surprising since he devoted himself more to his studies than to draughtsmanship." Jacob Burckhardt portrayed Alberti in The Civilization of the Renaissance in Italy as a truly universal genius. "And Leonardo Da Vinci was to Alberti as the finisher to the beginner, as the master to the dilettante. Would only that Vasari's work were here supplemented by a description like that of Alberti! The colossal outlines of Leonardo's nature can never be more than dimly and distantly conceived."
Alberti is said to appear in Mantegna's great frescoes in the Camera degli Sposi, as the older man dressed in dark red clothes, who whispers in the ear of Ludovico Gonzaga, the ruler of Mantua. In Alberti's self-portrait, a large plaquette, he is clothed as a Roman. To the left of his profile is a winged eye. On the reverse side is the question, Quid tum? (what then), taken from Virgil's Eclogues: "So what, if Amyntas is dark? (quid tum si fuscus Amyntas?) Violets are black, and hyacinths are black."
Contributions
Alberti made a variety of contributions to several fields:
Alberti was the creator of a theory called "historia". In his treatise De pictura (1435) he explains the theory of the accumulation of people, animals, and buildings, which create harmony amongst each other, and "hold the eye of the learned and unlearned spectator for a long while with a certain sense of pleasure and emotion". De pictura ("On Painting") contained the first scientific study of perspective. An Italian translation of De pictura (Della pittura) was published in 1436, one year after the original Latin version, and was addressed to Filippo Brunelleschi in the preface. The Latin version had been dedicated to Alberti's humanist patron, Gianfrancesco Gonzaga of Mantua. He also wrote a work on sculpture, De statua.
Alberti used his artistic treatises to propound a new humanistic theory of art. He drew on his contacts with early Quattrocento artists such as Brunelleschi, Donatello, and Ghiberti to provide a practical handbook for the renaissance artist.
Alberti wrote an influential work on architecture, De re aedificatoria, which by the sixteenth century had been translated into Italian (by Cosimo Bartoli), French, Spanish, and English. An English translation by Giacomo Leoni appeared in the early eighteenth century. Newer translations are now available.
Whilst Alberti's treatises on painting and architecture have been hailed as the founding texts of a new form of art, breaking from the Gothic past, it is impossible to know the extent of their practical impact within his lifetime. His praise of the Calumny of Apelles led to several attempts to emulate it, including paintings by Botticelli and Signorelli. His stylistic ideals have been put into practice in the works of Mantegna, Piero della Francesca, and Fra Angelico. But how far Alberti was responsible for these innovations and how far he was simply articulating the trends of the artistic movement, with which his practical experience had made him familiar, is impossible to ascertain.
He was so skilled in Latin verse that a comedy he wrote in his twentieth year, entitled Philodoxius, would later deceive the younger Aldus Manutius, who edited and published it as the genuine work of 'Lepidus Comicus'.
He has been credited with being the author, or alternatively, the designer of the woodcut illustrations, of the Hypnerotomachia Poliphili, a strange fantasy novel.
Apart from his treatises on the arts, Alberti also wrote: Philodoxus ("Lover of Glory", 1424), De commodis litterarum atque incommodis ("On the Advantages and Disadvantages of Literary Studies", 1429), Intercoenales ("Table Talk", c. 1429), Della famiglia ("On the Family", begun 1432), Vita S. Potiti ("Life of St. Potitus", 1433), De iure (On Law, 1437), Theogenius ("The Origin of the Gods", c. 1440), Profugorium ab aerumna ("Refuge from Mental Anguish"), Momus (1450), and De Iciarchia ("On the Prince", 1468). These and other works were translated and printed in Venice by the humanist Cosimo Bartoli in 1586.
Alberti was an accomplished cryptographer by the standard of his day and invented the first polyalphabetic cipher, which is now known as the Alberti cipher, and machine-assisted encryption using his Cipher Disk. The polyalphabetic cipher was, at least in principle (for it was not properly used for several hundred years), the most significant advance in cryptography since before Julius Caesar's time. Cryptography historian David Kahn entitles him the "Father of Western Cryptography", pointing to three significant advances in the field that can be attributed to Alberti: "the earliest Western exposition of cryptanalysis, the invention of polyalphabetic substitution, and the invention of enciphered code".
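The principle behind that advance can be sketched briefly in Python. The sketch below is hedged: it follows the later Vigenère-style keyword form rather than Alberti's actual disk-and-index procedure, and the key "LEON" is an arbitrary example, but it shows the defining property of polyalphabetic substitution, namely that repeated plaintext letters no longer map to the same ciphertext letter.

```python
import string

ALPHABET = string.ascii_uppercase

def encipher(plaintext, key):
    """Shift each letter by the next key letter in turn, cycling through the
    key, so the substitution alphabet changes as encipherment proceeds."""
    shifts = [ALPHABET.index(k) for k in key.upper()]
    out, i = [], 0
    for ch in plaintext.upper():
        if ch in ALPHABET:
            out.append(ALPHABET[(ALPHABET.index(ch) + shifts[i % len(shifts)]) % 26])
            i += 1
        else:
            out.append(ch)           # leave spaces and punctuation untouched
    return "".join(out)

print(encipher("ATTACK AT DAWN", "LEON"))   # the repeated As encrypt differently
```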
According to Alberti, in a short autobiography written c. 1438 in Latin and in the third person (many but not all scholars consider this work to be an autobiography), he was capable of "standing with his feet together, and springing over a man's head." The autobiography survives thanks to an eighteenth-century transcription by Antonio Muratori. Alberti also claimed that he "excelled in all bodily exercises; could, with feet tied, leap over a standing man; could in the great cathedral, throw a coin far up to ring against the vault; amused himself by taming wild horses and climbing mountains". Needless to say, many in the Renaissance promoted themselves in various ways, and Alberti's eagerness to promote his skills should be understood, to some extent, within that framework. (The same caution applies to the information above, some of which originates in this so-called autobiography.)
Alberti claimed in his "autobiography" to be an accomplished musician and organist, but there is no hard evidence to support this claim. In fact, musical posers were not uncommon in his day (see the lyrics to the song Musica Son by Francesco Landini for complaints to this effect). He held the appointment of canon in the metropolitan church of Florence, and thus – perhaps – had the leisure to devote himself to this art, but this is only speculation. Vasari, too, credited him with musical accomplishment.
He was interested in the drawing of maps and worked with the astronomer, astrologer, and cartographer Paolo Toscanelli.
In terms of aesthetics, Alberti was one of the first to define the work of art as an imitation of nature, understood precisely as a selection of its most beautiful parts: "So let's take from nature what we are going to paint, and from nature we choose the most beautiful and worthy things".
Works in print
De Pictura, 1435. On Painting, in English; De Pictura, in Latin; Della Pittura, in Italian (1804 [1434]).
Momus, Latin text and English translation, 2003
De re aedificatoria (1452, Ten Books on Architecture). Alberti, Leon Battista. De re aedificatoria. On the Art of Building in Ten Books (translated by Joseph Rykwert, Robert Tavernor and Neil Leach). Cambridge, Mass.: MIT Press, 1988. Latin, French and Italian editions and in English translation.
De Cifris (A Treatise on Ciphers, 1467), trans. A. Zaccagnini. Foreword by David Kahn, Galimberti, Torino 1997.
"Leon Battista Alberti. On Painting. A New Translation an Critical Edition", Edited and Translated by Rocco Sinisgalli, Cambridge University Press, New York, May 2011, , (books.google.de)
I libri della famiglia, Italian edition
"Dinner pieces". A Translation of the Intercenales by David Marsh. Center for Medieval and Early Renaissance Studies, State University of New York, Binghamton 1987.
"Descriptio urbis Romae. Leon Battista Alberti's Delineation of the city of Rome". Peter Hicks, Arizona Board of Regents for Arizona State university 2007.
Legacy
Borsi states that Alberti's writings on architecture continue to influence modern and contemporary architecture: "The organicism and nature-worship of Wright, the neat classicism of Mies van der Rohe, the regulatory outlines and anthropomorphic, harmonic, modular systems of Le Corbusier, and Kahn's revival of the 'antique' are all elements that tempt one to trace Alberti's influence on modern architecture."
In popular culture
Leon Battista Alberti is a major character in Roberto Rossellini's three-part television film The Age of the Medici (1973), with the third and final part, Leon Battista Alberti: Humanism, centering on him, his works (such as Santa Maria Novella), and his thought. He is played by Italian actor Virginio Gazzolo.
Mentioned in the 1994 film Renaissance Man (also released as Army Intelligence), starring Danny DeVito.
Mentioned in the 2004 book The Rule of Four by Ian Caldwell and Dustin Thomason
Notes
References
Magda Saura, "Building codes in the architectural treatise De re aedificatoria,"
Third International Congress on Construction History, Cottbus, May 2009.
http://hdl.handle.net/2117/14252
Further reading
Clark, Kenneth. "Leon Battista Alberti: a Renaissance Personality." History Today (July 1951) 1#7 pp 11-18 online
Francesco Borsi, Leon Battista Alberti. Das Gesamtwerk. Stuttgart 1982
Günther Fischer, Leon Battista Alberti. Sein Leben und seine Architekturtheorie. Wissenschaftliche Buchgesellschaft Darmstadt 2012
Fontana-Giusti, Korolija Gordana, "The Cutting Surface: On Perspective as a Section, Its Relationship to Writing, and Its Role in Understanding Space" AA Files No. 40 (Winter 1999), pp. 56–64 London: Architectural Association School of Architecture.
Fontana-Giusti, Gordana. "Walling and the city: the effects of walls and walling within the city space", The Journal of Architecture pp 309–45 Volume 16, Issue 3, London & New York: Routledge, 2011.
Anthony Grafton, Leon Battista Alberti. Master Builder of the Italian Renaissance. New York 2000
Mark Jarzombek, “The Structural Problematic of Leon Battista Alberti's De pictura”, Renaissance Studies 4/3 (September 1990): 273–285.
Michel Paoli, Leon Battista Alberti, Torino 2007
Les Livres de la famille d'Alberti, Sources, sens et influence, sous la direction de Michel Paoli, avec la collaboration d'Elise Leclerc et Sophie Dutheillet de Lamothe, préface de Françoise Choay, Paris, Classiques Garnier, 2013.
Manfredo Tafuri, Interpreting the Renaissance: Princes, Cities, Architects, trans. Daniel Sherer. New Haven 2006.
Robert Tavernor, On Alberti and the Art of Building. New Haven and London: Yale University Press, 1998. .
Vasari, The Lives of the Artists Oxford University Press, 1998.
Wright, D.R. Edward, "Alberti's De Pictura: Its Literary Structure and Purpose", Journal of the Warburg and Courtauld Institutes, Vol. 47, 1984 (1984), pp. 52–71.
(LA) Leon Battista Alberti, De re aedificatoria, Argentorati, excudebat M. Iacobus Cammerlander Moguntinus, 1541.
(LA) Leon Battista Alberti, De re aedificatoria, Florentiae, accuratissime impressum opera magistri Nicolai Laurentii Alamani.
Leon Battista Alberti, Opere volgari. 1, Firenze, Tipografia Galileiana, 1843.
Leon Battista Alberti, Opere volgari. 2, Firenze, Tipografia Galileiana, 1844.
Leon Battista Alberti, Opere volgari. 4, Firenze, Tipografia Galileiana, 1847.
Leon Battista Alberti, Opere volgari. 5, Firenze, Tipografia Galileiana, 1849.
Leon Battista Alberti, Opere, Florentiae, J. C. Sansoni, 1890.
Leon Battista Alberti, Trattati d'arte, Bari, Laterza, 1973.
Leon Battista Alberti, Ippolito e Leonora, Firenze, Bartolomeo de' Libri, prima del 1495.
Leon Battista Alberti, Ecatonfilea, Stampata in Venesia, per Bernardino da Cremona, 1491.
Leon Battista Alberti, Deifira, Padova, Lorenzo Canozio, 1471.
Leon Battista Alberti, Teogenio, Milano, Leonard Pachel, circa 1492.
Leon Battista Alberti, Libri della famiglia, Bari, G. Laterza, 1960.
Leon Battista Alberti, Rime e trattati morali, Bari, Laterza, 1966.
Albertiana, Rivista della Société Intérnationale Leon Battista Alberti, Firenze, Olschki, 1998 sgg.
Franco Borsi, Leon Battista Alberti: Opera completa, Electa, Milano, 1973;
Giovanni Ponte, Leon Battista Alberti: Umanista e scrittore, Tilgher, Genova, 1981;
Paolo Marolda, Crisi e conflitto in Leon Battista Alberti, Bonacci, Roma, 1988;
Roberto Cardini, Mosaici: Il nemico dell'Alberti, Bulzoni, Roma 1990;
Rosario Contarino, Leon Battista Alberti moralista, presentazione di Francesco Tateo, S. Sciascia, Caltanissetta 1991;
Pierluigi Panza, Leon Battista Alberti: Filosofia e teoria dell'arte, introduzione di Dino Formaggio, Guerini, Milano 1994;
Cecil Grayson, Studi su Leon Battista Alberti, a cura di Paola Claut, Olschki, Firenze 1998;
Stefano Borsi, Momus, o Del principe: Leon Battista Alberti, i papi, il giubileo, Polistampa, Firenze 1999;
Luca Boschetto, Leon Battista Alberti e Firenze: Biografia, storia, letteratura, Olschki, Firenze 2000;
Alberto G. Cassani, La fatica del costruire: Tempo e materia nel pensiero di Leon Battista Alberti, Unicopli, Milano 2000;
Elisabetta Di Stefano, L'altro sapere: Bello, arte, immagine in Leon Battista Alberti, Centro internazionale studi di estetica, Palermo 2000;
Rinaldo Rinaldi, Melancholia Christiana. Studi sulle fonti di Leon Battista Alberti, Firenze, Olschki, 2002;
Francesco Furlan, Studia albertiana: Lectures et lecteurs de L.B. Alberti, N. Aragno-J. Vrin, Torino-Parigi 2003;
Anthony Grafton, Leon Battista Alberti: Un genio universale, Laterza, Roma-Bari 2003;
D. Mazzini, S. Martini. Villa Medici a Fiesole. Leon Battista Alberti e il prototipo di villa rinascimentale, Centro Di, Firenze 2004;
Michel Paoli, Leon Battista Alberti 1404–1472, Parigi, Editions de l'Imprimeur, 2004; ora tradotto in italiano: Michel Paoli, Leon Battista Alberti, Bollati Boringhieri, Torino 2007, 124 p. + 40 ill.
Anna Siekiera, Bibliografia linguistica albertiana, Firenze, Edizioni Polistampa, 2004 (Edizione Nazionale delle Opere di Leon Battista Alberti, Serie «Strumenti», 2);
Francesco P. Fiore: La Roma di Leon Battista Alberti. Umanisti, architetti e artisti alla scoperta dell'antico nella città del Quattrocento, Skira, Milano 2005;
Leon Battista Alberti architetto, a cura di Giorgio Grassi e Luciano Patetta, testi di Giorgio Grassi et alii, Banca CR, Firenze 2005;
Restaurare Leon Battista Alberti: il caso di Palazzo Rucellai, a cura di Simonetta Bracciali, presentazione di Antonio Paolucci, Libreria Editrice Fiorentina, Firenze 2006;
Stefano Borsi, Leon Battista Alberti e Napoli, Polistampa, Firenze 2006;
Gabriele Morolli, Leon Battista Alberti. Firenze e la Toscana, Maschietto Editore, Firenze, 2006.
F. Canali, "Leon Battista Alberti "Camaleonta" e l'idea del Tempio Malatestiano dalla Storiografia al Restauro, in Il Tempio della Meraviglia, a cura di F. Canali, C. Muscolino, Firenze, 2007.
F. Canali, La facciata del Tempio Malatestiano, in Il Tempio della Meraviglia, a cura di F. Canali, C. Muscolino, Firenze, 2007.
V. C. Galati, "Ossa" e "illigamenta" nel De Re aedificatoria. Caratteri costruttivi e ipotesi strutturali nella lettura della tecnologia antiquaria del cantiere del Tempio Malatestiano, in Il Tempio della Meraviglia, a cura di F. Canali, C. Muscolino, Firenze, 2007.
Alberti e la cultura del Quattrocento, Atti del Convegno internazionale di Studi, (Firenze, Palazzo Vecchio, Salone dei Dugento, 16-17-18 dicembre 2004), a cura di R. Cardini e M. Regoliosi, Firenze, Edizioni Polistampa, 2007.
AA.VV, Brunelleschi, Alberti e oltre, a cura di F. Canali, «Bollettino della Società di Studi Fiorentini», 16–17, 2008.
F. Canali, Tracce albertiane nella Romagna umanistica tra Rimini e Faenza, in Brunelleschi, Alberti e oltre, a cura di F. Canali, «Bollettino della Società di Studi Fiorentini», 16–17, 2008.
V. C. Galati, Riflessioni sulla Reggia di Castelnuovo a Napoli: morfologie architettoniche e tecniche costruttive. Un univoco cantiere antiquario tra Donatello e Leon Battista Alberti?, in Brunelleschi, Alberti e oltre, a cura di F. Canali, «Bollettino della Società di Studi Fiorentini», 16–17, 2008.
F. Canali, V. C. Galati, Leon Battista Alberti, gli 'Albertiani' e la Puglia umanistica, in Brunelleschi, Alberti e oltre, a cura di F. Canali, «Bollettino della Società di Studi Fiorentini», 16–17, 2008.
G. Morolli, Alberti: la triplice luce della pulcritudo, in Brunelleschi, Alberti e oltre, a cura di F. Canali, «Bollettino della Società di Studi Fiorentini», 16–17, 2008.
G. Morolli, Pienza e Alberti, in Brunelleschi, Alberti e oltre, a cura di F. Canali, «Bollettino della Società di Studi Fiorentini», 16–17, 2008.
Christoph Luitpold Frommel, Alberti e la porta trionfale di Castel Nuovo a Napoli, in «Annali di architettura» n° 20, Vicenza 2008;
Massimo Bulgarelli, Leon Battista Alberti, 1404-1472: Architettura e storia, Electa, Milano 2008;
Caterina Marrone, I segni dell'inganno. Semiotica della crittografia, Stampa Alternativa&Graffiti, Viterbo 2010;
S. Borsi, Leon Battista Alberti e Napoli, Firenze, 2011.
V. Galati, Il Torrione quattrocentesco di Bitonto dalla committenza di Giovanni Ventimiglia e Marino Curiale; dagli adeguamenti ai dettami del De Re aedificatoria di Leon Battista Alberti alle proposte di Francesco di Giorgio Martini (1450-1495), in Defensive Architecture of the Mediterranean XV to XVIII centuries, a cura di G. Verdiani, Firenze, 2016, vol.III.
V. Galati, Tipologie di Saloni per le udienze nel Quattrocento tra Ferrara e Mantova. Oeci, Basiliche, Curie e "Logge all'antica" tra Vitruvio e Leon Battista Alberti nel "Salone dei Mesi di Schifanoia a Ferrara e nella "Camera Picta" di Palazzo Ducale a Mantova, in Per amor di Classicismo, a cura di F. Canali «Bollettino della Società di Studi Fiorentini», 24–25, 2016.
S. Borsi, Leon Battista, Firenze, 2018.
External links
Albertian Bibliography on line
MS Typ 422.2. Alberti, Leon Battista, 1404–1472. Ex ludis rerum mathematicarum : manuscript, [14--]. Houghton Library, Harvard University.
Palladio's Literary Predecessors
"Learning from the City-States? Leon Battista Alberti and the London Riots", Caspar Pearson, Berfrois, September 26, 2011
Online resources for Alberti's buildings
Alberti Photogrammetric Drawings
S. Andrea, Mantua, Italy
Sta. Maria Novella, Florence, Italy
Alberti's works online
De pictura/Della pittura, original Latin and Italian texts (English translation)
Libri della famiglia – Libro 3 – Dignità del volgare on audio MP3
Momus, (printed in Rome in 1520), full digital facsimile, CAMENA Project
The Architecture of Leon Battista Alberti in Ten Books, (printed in London in 1755), full digital facsimile, Linda Hall Library
Works of Alberti, book facsimiles via archive.org
Leon Battista Alberti
1404 births
1472 deaths
15th-century Genoese people
15th-century Italian Roman Catholic priests
15th-century Latin writers
15th-century philosophers
15th-century Italian architects
15th-century Italian painters
15th-century Italian poets
15th-century Italian sculptors
15th-century Italian mathematicians
Italian Renaissance architects
Italian Renaissance humanists
Italian Renaissance painters
Italian Renaissance writers
Architectural theoreticians
Italian architecture writers
Italian medallists
Italian philosophers
Italian male painters
Italian male poets
Italian male sculptors
Linguists from Italy
Catholic philosophers
Artist authors
Pre-19th-century cryptographers
15th-century antiquarians |
18209 | https://en.wikipedia.org/wiki/Lossless%20compression | Lossless compression | Lossless compression is a class of data compression that allows the original data to be perfectly reconstructed from the compressed data. By contrast, lossy compression permits reconstruction only of an approximation of the original data, though usually with greatly improved compression rates (and therefore reduced media sizes).
By operation of the pigeonhole principle, no lossless compression algorithm can efficiently compress all possible data. For this reason, many different algorithms exist that are designed either with a specific type of input data in mind or with specific assumptions about what kinds of redundancy the uncompressed data are likely to contain. Compression ratios therefore tend to be higher on human- and machine-readable documents and code than on high-entropy binary data (random bytes).
Lossless data compression is used in many applications. For example, it is used in the ZIP file format and in the GNU tool gzip. It is also often used as a component within lossy data compression technologies (e.g. lossless mid/side joint stereo preprocessing by MP3 encoders and other lossy audio encoders).
Lossless compression is used in cases where it is important that the original and the decompressed data be identical, or where deviations from the original data would be unfavourable. Typical examples are executable programs, text documents, and source code. Some image file formats, like PNG or GIF, use only lossless compression, while others like TIFF and MNG may use either lossless or lossy methods. Lossless audio formats are most often used for archiving or production purposes, while smaller lossy audio files are typically used on portable players and in other cases where storage space is limited or exact replication of the audio is unnecessary.
Lossless compression techniques
Most lossless compression programs do two things in sequence: the first step generates a statistical model for the input data, and the second step uses this model to map input data to bit sequences in such a way that "probable" (e.g. frequently encountered) data will produce shorter output than "improbable" data.
The primary encoding algorithms used to produce bit sequences are Huffman coding (also used by the deflate algorithm) and arithmetic coding. Arithmetic coding achieves compression rates close to the best possible for a particular statistical model, which is given by the information entropy, whereas Huffman compression is simpler and faster but produces poor results for models that deal with symbol probabilities close to 1.
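As an illustration of the first of these encoders, the following Python sketch (a minimal, assumption-laden example rather than any production implementation) builds a Huffman code from the symbol frequencies of a short string; frequent symbols receive short bit strings and rare ones longer bit strings.

```python
import heapq
from collections import Counter

def huffman_code(data):
    """Build a prefix code in which frequent symbols get shorter bit strings."""
    freq = Counter(data)
    # Heap entries: (frequency, unique tie-breaker, [(symbol, code), ...])
    heap = [(f, i, [(sym, "")]) for i, (sym, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    if len(heap) == 1:                       # degenerate single-symbol input
        return {heap[0][2][0][0]: "0"}
    tie = len(heap)
    while len(heap) > 1:
        f1, _, low = heapq.heappop(heap)     # merge the two least frequent subtrees
        f2, _, high = heapq.heappop(heap)
        merged = [(s, "0" + c) for s, c in low] + [(s, "1" + c) for s, c in high]
        heapq.heappush(heap, (f1 + f2, tie, merged))
        tie += 1
    return dict(heap[0][2])

codes = huffman_code("abracadabra")
encoded = "".join(codes[ch] for ch in "abracadabra")
print(codes, len(encoded), "bits")           # 23 bits instead of 88 in 8-bit ASCII
```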
There are two primary ways of constructing statistical models: in a static model, the data is analyzed and a model is constructed, then this model is stored with the compressed data. This approach is simple and modular, but has the disadvantage that the model itself can be expensive to store, and also that it forces using a single model for all data being compressed, and so performs poorly on files that contain heterogeneous data. Adaptive models dynamically update the model as the data is compressed. Both the encoder and decoder begin with a trivial model, yielding poor compression of initial data, but as they learn more about the data, performance improves. Most popular types of compression used in practice now use adaptive coders.
Lossless compression methods may be categorized according to the type of data they are designed to compress. While, in principle, any general-purpose lossless compression algorithm (general-purpose meaning that they can accept any bitstring) can be used on any type of data, many are unable to achieve significant compression on data that are not of the form for which they were designed to compress. Many of the lossless compression techniques used for text also work reasonably well for indexed images.
Multimedia
These techniques take advantage of the specific characteristics of images such as the common phenomenon of contiguous 2-D areas of similar tones.
Every pixel but the first is replaced by the difference to its left neighbor. This leads to small values having a much higher probability than large values.
This is often also applied to sound files, and can compress files that contain mostly low frequencies and low volumes.
For images, this step can be repeated by taking the difference to the top pixel, and then in videos, the difference to the pixel in the next frame can be taken.
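A minimal sketch of this left-neighbour prediction, in Python and with made-up pixel values, shows how a smooth row of samples turns into a run of small residuals that a subsequent entropy coder can encode cheaply; it is illustrative only and does not follow any particular codec.

```python
def delta_encode(row):
    """Keep the first value; replace each later value by the difference
    to its left neighbour."""
    return [row[0]] + [row[i] - row[i - 1] for i in range(1, len(row))]

def delta_decode(deltas):
    row = [deltas[0]]
    for d in deltas[1:]:
        row.append(row[-1] + d)      # undo the prediction step exactly
    return row

row = [200, 201, 203, 203, 202, 180]
residuals = delta_encode(row)
assert delta_decode(residuals) == row
print(residuals)                     # [200, 1, 2, 0, -1, -22]
```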
A hierarchical version of this technique takes neighboring pairs of data points, stores their difference and sum, and on a higher level with lower resolution continues with the sums. This is called a discrete wavelet transform. JPEG 2000 additionally uses data points from other pairs and multiplication factors to mix them into the difference. These factors must be integers, so that the result is an integer under all circumstances. The values are therefore increased, increasing file size, but the distribution of values will hopefully be more peaked.
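One level of such a pairwise decomposition can be sketched as follows (a simplified, illustrative Python version that stores exact sums and differences; the reversible 5/3 wavelet actually used by JPEG 2000 adds the lifting factors described above).

```python
def pair_transform(data):
    """Replace neighbouring pairs by their sum and difference; the sums form
    the lower-resolution signal that the next level would process again."""
    sums, diffs = [], []
    for a, b in zip(data[0::2], data[1::2]):
        sums.append(a + b)
        diffs.append(a - b)
    return sums, diffs

def pair_inverse(sums, diffs):
    out = []
    for s, d in zip(sums, diffs):
        out.extend(((s + d) // 2, (s - d) // 2))   # exact: s + d is always even
    return out

signal = [10, 12, 13, 13, 40, 41, 41, 40]
sums, diffs = pair_transform(signal)
assert pair_inverse(sums, diffs) == signal
print(sums, diffs)                   # [22, 26, 81, 81] [-2, 0, -1, 1]
```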
The adaptive encoding uses the probabilities from the previous sample in sound encoding, from the left and upper pixel in image encoding, and additionally from the previous frame in video encoding. In the wavelet transformation, the probabilities are also passed through the hierarchy.
Historical legal issues
Many of these methods are implemented in open-source and proprietary tools, particularly LZW and its variants. Some algorithms are patented in the United States and other countries and their legal usage requires licensing by the patent holder. Because of patents on certain kinds of LZW compression, and in particular licensing practices by patent holder Unisys that many developers considered abusive, some open source proponents encouraged people to avoid using the Graphics Interchange Format (GIF) for compressing still image files in favor of Portable Network Graphics (PNG), which combines the LZ77-based deflate algorithm with a selection of domain-specific prediction filters. However, the patents on LZW expired on June 20, 2003.
Many of the lossless compression techniques used for text also work reasonably well for indexed images, but there are other techniques that do not work for typical text that are useful for some images (particularly simple bitmaps), and other techniques that take advantage of the specific characteristics of images (such as the common phenomenon of contiguous 2-D areas of similar tones, and the fact that color images usually have a preponderance of a limited range of colors out of those representable in the color space).
As mentioned previously, lossless sound compression is a somewhat specialized area. Lossless sound compression algorithms can take advantage of the repeating patterns shown by the wave-like nature of the data – essentially using autoregressive models to predict the "next" value and encoding the (hopefully small) difference between the expected value and the actual data. If the difference between the predicted and the actual data (called the error) tends to be small, then certain difference values (like 0, +1, −1 etc. on sample values) become very frequent, which can be exploited by encoding them in few output bits.
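A hedged sketch of such a predictor is shown below in Python. It uses a fixed second-order linear extrapolation, similar in spirit to (but much simpler than) the predictors found in real lossless audio codecs, and the sample values are invented for illustration.

```python
def predict_residuals(samples):
    """Extrapolate each sample from the two before it and keep only the
    (hopefully small) prediction error."""
    residuals = list(samples[:2])              # first two samples stored verbatim
    for n in range(2, len(samples)):
        predicted = 2 * samples[n - 1] - samples[n - 2]
        residuals.append(samples[n] - predicted)
    return residuals

def reconstruct(residuals):
    samples = list(residuals[:2])
    for n in range(2, len(residuals)):
        predicted = 2 * samples[n - 1] - samples[n - 2]
        samples.append(predicted + residuals[n])
    return samples

wave = [0, 98, 190, 269, 333, 377, 399]        # a slowly bending waveform
res = predict_residuals(wave)
assert reconstruct(res) == wave
print(res)                                     # [0, 98, -6, -13, -15, -20, -22]
```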
It is sometimes beneficial to compress only the differences between two versions of a file (or, in video compression, of successive images within a sequence). This is called delta encoding (from the Greek letter Δ, which in mathematics, denotes a difference), but the term is typically only used if both versions are meaningful outside compression and decompression. For example, while the process of compressing the error in the above-mentioned lossless audio compression scheme could be described as delta encoding from the approximated sound wave to the original sound wave, the approximated version of the sound wave is not meaningful in any other context.
Lossless compression methods
No lossless compression algorithm can efficiently compress all possible data (see the section Limitations below for details). For this reason, many different algorithms exist that are designed either with a specific type of input data in mind or with specific assumptions about what kinds of redundancy the uncompressed data are likely to contain.
Some of the most common lossless compression algorithms are listed below.
General purpose
Run-length encoding (RLE) – Simple scheme that provides good compression of data containing many runs of the same value (a minimal sketch follows this list)
Huffman coding – Entropy encoding, pairs well with other algorithms
Arithmetic coding – Entropy encoding
ANS – Entropy encoding, used by LZFSE and Zstandard
Lempel–Ziv compression (LZ77 and LZ78) – Dictionary-based algorithm that forms the basis for many other algorithms
Lempel–Ziv–Storer–Szymanski (LZSS) – Used by WinRAR in tandem with Huffman coding
Deflate – Combines LZSS compression with Huffman coding, used by ZIP, gzip, and PNG images
Lempel–Ziv–Welch (LZW) – Used by GIF images and Unix's compress utility
Lempel–Ziv–Markov chain algorithm (LZMA) – Very high compression ratio, used by 7zip and xz
Burrows–Wheeler transform – Reversible transform for making textual data more compressible, used by bzip2
Prediction by partial matching (PPM) – Optimized for compressing plain text
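As noted against the run-length encoding entry above, the scheme is simple enough to sketch in a few lines of Python; the example below is purely illustrative and not any particular implementation.

```python
def rle_encode(data):
    """Collapse runs of identical values into (value, run_length) pairs."""
    runs = []
    for value in data:
        if runs and runs[-1][0] == value:
            runs[-1][1] += 1
        else:
            runs.append([value, 1])
    return [(v, n) for v, n in runs]

def rle_decode(runs):
    out = []
    for value, count in runs:
        out.extend([value] * count)
    return out

data = list("WWWWWWBBBWWWWBB")
runs = rle_encode(data)
assert rle_decode(runs) == data
print(runs)                          # [('W', 6), ('B', 3), ('W', 4), ('B', 2)]
```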
Audio
Apple Lossless (ALAC – Apple Lossless Audio Codec)
Adaptive Transform Acoustic Coding (ATRAC)
Audio Lossless Coding (also known as MPEG-4 ALS)
Direct Stream Transfer (DST)
Dolby TrueHD
DTS-HD Master Audio
Free Lossless Audio Codec (FLAC)
Meridian Lossless Packing (MLP)
Monkey's Audio (Monkey's Audio APE)
MPEG-4 SLS (also known as HD-AAC)
OptimFROG
Original Sound Quality (OSQ)
RealPlayer (RealAudio Lossless)
Shorten (SHN)
TTA (True Audio Lossless)
WavPack (WavPack lossless)
WMA Lossless (Windows Media Lossless)
Raster graphics
AVIF – AOMedia Video 1 Image File Format
FLIF – Free Lossless Image Format
HEIF – High Efficiency Image File Format (lossless or lossy compression, using HEVC)
ILBM – (lossless RLE compression of Amiga IFF images)
JBIG2 – (lossless or lossy compression of B&W images)
JPEG 2000 – (includes lossless compression method via LeGall-Tabatabai 5/3 reversible integer wavelet transform)
JPEG-LS – (lossless/near-lossless compression standard)
JPEG XL – (lossless or lossy compression)
JPEG XR – formerly WMPhoto and HD Photo, includes a lossless compression method
LDCT – Lossless Discrete Cosine Transform
PCX – PiCture eXchange
PDF – Portable Document Format (lossless or lossy compression)
PNG – Portable Network Graphics
TGA – Truevision TGA
TIFF – Tagged Image File Format (lossless or lossy compression)
WebP – (lossless or lossy compression of RGB and RGBA images)
3D Graphics
OpenCTM – Lossless compression of 3D triangle meshes
Video
See list of lossless video codecs
Cryptography
Cryptosystems often compress data (the "plaintext") before encryption for added security. When properly implemented, compression greatly increases the unicity distance by removing patterns that might facilitate cryptanalysis. However, many ordinary lossless compression algorithms produce headers, wrappers, tables, or other predictable output that might instead make cryptanalysis easier. Thus, cryptosystems must utilize compression algorithms whose output does not contain these predictable patterns.
Genetics and Genomics
Genetics compression algorithms (not to be confused with genetic algorithms) are the latest generation of lossless algorithms that compress data (typically sequences of nucleotides) using both conventional compression algorithms and specific algorithms adapted to genetic data. In 2012, a team of scientists from Johns Hopkins University published the first genetic compression algorithm that does not rely on external genetic databases for compression. HAPZIPPER was tailored for HapMap data and achieves over 20-fold compression (95% reduction in file size), providing 2- to 4-fold better compression much faster than leading general-purpose compression utilities.
Genomic sequence compression algorithms, also known as DNA sequence compressors, explore the fact that DNA sequences have characteristic properties, such as inverted repeats. The most successful compressors are XM and GeCo. For eukaryotes XM is slightly better in compression ratio, though for sequences larger than 100 MB its computational requirements are impractical.
Executables
Self-extracting executables contain a compressed application and a decompressor. When executed, the decompressor transparently decompresses and runs the original application. This is especially often used in demo coding, where competitions are held for demos with strict size limits, as small as 1k.
This type of compression is not strictly limited to binary executables, but can also be applied to scripts, such as JavaScript.
Lossless compression benchmarks
Lossless compression algorithms and their implementations are routinely tested in head-to-head benchmarks. There are a number of better-known compression benchmarks. Some benchmarks cover only the data compression ratio, so winners in these benchmarks may be unsuitable for everyday use due to the slow speed of the top performers. Another drawback of some benchmarks is that their data files are known, so some program writers may optimize their programs for best performance on a particular data set. The winners on these benchmarks often come from the class of context-mixing compression software.
Matt Mahoney, in his February 2010 edition of the free booklet Data Compression Explained, additionally lists the following:
The Calgary Corpus, dating back to 1987, is no longer widely used due to its small size. The associated Calgary Compression Challenge was created and maintained from May 21, 1996, through May 21, 2016, by Leonid A. Broukhis.
The Large Text Compression Benchmark and the similar Hutter Prize both use a trimmed Wikipedia XML UTF-8 data set.
The Generic Compression Benchmark, maintained by Matt Mahoney, tests compression of data generated by random Turing machines.
Sami Runsas (the author of NanoZip) maintained Compression Ratings, a benchmark similar to Maximum Compression multiple file test, but with minimum speed requirements. It offered the calculator that allowed the user to weight the importance of speed and compression ratio. The top programs were fairly different due to the speed requirement. In January 2010, the top program was NanoZip followed by FreeArc, CCM, flashzip, and 7-Zip.
The Monster of Compression benchmark by Nania Francesco Antonio tested compression on 1 GB of public data with a 40-minute time limit. In December 2009, the top-ranked archiver was NanoZip 0.07a and the top-ranked single file compressor was ccmx 1.30c.
The Compression Ratings website published a chart summary of the "frontier" in compression ratio and time.
The Compression Analysis Tool is a Windows application that enables end users to benchmark the performance characteristics of streaming implementations of LZF4, Deflate, ZLIB, GZIP, BZIP2 and LZMA using their own data. It produces measurements and charts with which users can compare the compression speed, decompression speed and compression ratio of the different compression methods, and examine how the compression level, buffer size and flushing operations affect the results.
Limitations
Lossless data compression algorithms (that do not attach compression id labels to their output data sets) cannot guarantee compression for all input data sets. In other words, for any lossless data compression algorithm, there will be an input data set that does not get smaller when processed by the algorithm, and for any lossless data compression algorithm that makes at least one file smaller, there will be at least one file that it makes larger. This is easily proven with elementary mathematics using a counting argument called the pigeonhole principle, as follows:
Assume that each file is represented as a string of bits of some arbitrary length.
Suppose that there is a compression algorithm that transforms every file into an output file that is no longer than the original file, and that at least one file will be compressed into an output file that is shorter than the original file.
Let M be the least number such that there is a file F with length M bits that compresses to something shorter. Let N be the length (in bits) of the compressed version of F.
Because N < M, every file of length N keeps its size during compression. There are 2^N such files possible. Together with F, this makes 2^N + 1 files that all compress into one of the 2^N files of length N.
But 2^N is smaller than 2^N + 1, so by the pigeonhole principle there must be some file of length N that is simultaneously the output of the compression function on two different inputs. That file cannot be decompressed reliably (which of the two originals should that yield?), which contradicts the assumption that the algorithm was lossless.
We must therefore conclude that our original hypothesis (that the compression function makes no file longer) is necessarily untrue.
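The counting at the heart of the argument is easy to check directly. The small Python sketch below, with an arbitrarily chosen N = 8, compares the number of bit strings of length at most N with the number of strictly shorter strings available as outputs; since the former always exceeds the latter, no injective "shrink-everything" mapping can exist.

```python
N = 8
inputs = sum(2 ** n for n in range(N + 1))         # all bit strings of length 0..N
shorter_outputs = sum(2 ** n for n in range(N))    # all bit strings of length 0..N-1
print(inputs, shorter_outputs)                     # 511 versus 255: no injection fits
```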
Most practical compression algorithms provide an "escape" facility that can turn off the normal coding for files that would become longer by being encoded. In theory, only a single additional bit is required to tell the decoder that the normal coding has been turned off for the entire input; however, most encoding algorithms use at least one full byte (and typically more than one) for this purpose. For example, deflate compressed files never need to grow by more than 5 bytes per 65,535 bytes of input.
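This behaviour can be observed with the zlib module in Python's standard library, which implements deflate; the sketch below compresses a block of random bytes and prints the small size increase (the exact overhead depends on the zlib version and compression settings, so the numbers are indicative only).

```python
import os
import zlib

random_block = os.urandom(65536)                 # already-random, incompressible data
compressed = zlib.compress(random_block, 9)
print(len(random_block), len(compressed),
      "overhead:", len(compressed) - len(random_block), "bytes")
```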
In fact, if we consider files of length N and all such files were equally probable, then for any lossless compression that reduces the size of some file, the expected length of a compressed file (averaged over all possible files of length N) must necessarily be greater than N. So if we know nothing about the properties of the data we are compressing, we might as well not compress it at all. A lossless compression algorithm is useful only when we are more likely to compress certain types of files than others; then the algorithm could be designed to compress those types of data better.
Thus, the main lesson from the argument is not that one risks big losses, but merely that one cannot always win. To choose an algorithm always means implicitly to select a subset of all files that will become usefully shorter. This is the theoretical reason why we need to have different compression algorithms for different kinds of files: there cannot be any algorithm that is good for all kinds of data.
The "trick" that allows lossless compression algorithms, used on the type of data they were designed for, to consistently compress such files to a shorter form is that the files the algorithms are designed to act on all have some form of easily modeled redundancy that the algorithm is designed to remove, and thus belong to the subset of files that that algorithm can make shorter, whereas other files would not get compressed or even get bigger. Algorithms are generally quite specifically tuned to a particular type of file: for example, lossless audio compression programs do not work well on text files, and vice versa.
In particular, files of random data cannot be consistently compressed by any conceivable lossless data compression algorithm; indeed, this result is used to define the concept of randomness in Kolmogorov complexity.
It is provably impossible to create an algorithm that can losslessly compress any data. While there have been many claims through the years of companies achieving "perfect compression" where an arbitrary number N of random bits can always be compressed to N − 1 bits, these kinds of claims can be safely discarded without even looking at any further details regarding the purported compression scheme. Such an algorithm contradicts fundamental laws of mathematics because, if it existed, it could be applied repeatedly to losslessly reduce any file to length 1.
On the other hand, it has also been proven that there is no algorithm to determine whether a file is incompressible in the sense of Kolmogorov complexity. Hence it is possible that any particular file, even if it appears random, may be significantly compressed, even including the size of the decompressor. An example is the digits of the mathematical constant pi, which appear random but can be generated by a very small program. However, even though it cannot be determined whether a particular file is incompressible, a simple theorem about incompressible strings shows that over 99% of files of any given length cannot be compressed by more than one byte (including the size of the decompressor).
Mathematical background
Abstractly, a compression algorithm can be viewed as a function on sequences (normally of octets). Compression is successful if the resulting sequence is shorter than the original sequence (and the instructions for the decompression map). For a compression algorithm to be lossless, the compression map must form an injection from "plain" to "compressed" bit sequences. The pigeonhole principle prohibits a bijection between the collection of sequences of length N and any subset of the collection of sequences of length N−1. Therefore, it is not possible to produce a lossless algorithm that reduces the size of every possible input sequence.
Points of application in real compression theory
Real compression algorithm designers accept that streams of high information entropy cannot be compressed, and accordingly, include facilities for detecting and handling this condition. An obvious way of detection is applying a raw compression algorithm and testing if its output is smaller than its input. Sometimes, detection is made by heuristics; for example, a compression application may consider files whose names end in ".zip", ".arj" or ".lha" uncompressible without any more sophisticated detection. A common way of handling this situation is quoting input, or uncompressible parts of the input in the output, minimizing the compression overhead. For example, the zip data format specifies the 'compression method' of 'Stored' for input files that have been copied into the archive verbatim.
The Million Random Digit Challenge
Mark Nelson, in response to claims of "magic" compression algorithms appearing in comp.compression, has constructed a 415,241 byte binary file of highly entropic content, and issued a public challenge of $100 to anyone to write a program that, together with its input, would be smaller than his provided binary data yet be able to reconstitute it without error.
A similar challenge, with $5,000 as reward, was issued by Mike Goldman.
See also
Comparison of file archivers
Data compression
David A. Huffman
Entropy (information theory)
Grammar-based code
Information theory
Kolmogorov complexity
List of codecs
Lossless Transform Audio Compression (LTAC)
Lossy compression
Precompressor
Universal code (data compression)
Normal number
References
Further reading
(790 pages)
(488 pages)
External links
Overview of US patent #7,096,360, a "Frequency-Time Based Data Compression Method" supporting "the compression, encryption, decompression, and decryption and persistence of many binary digits through frequencies where each frequency represents many bits".
Data compression
Lossless compression algorithms |
18529 | https://en.wikipedia.org/wiki/Lynx%20%28web%20browser%29 | Lynx (web browser) | Lynx is a customizable text-based web browser for use on cursor-addressable character cell terminals. It is the oldest web browser still being maintained, having started in 1992.
History
Lynx was a product of the Distributed Computing Group within Academic Computing Services of the University of Kansas, and was initially developed in 1992 by a team of students and staff at the university (Lou Montulli, Michael Grobe and Charles Rezac) as a hypertext browser used solely to distribute campus information as part of a Campus-Wide Information Server and for browsing the Gopher space. Beta availability was announced to Usenet on 22 July 1992. In 1993, Montulli added an Internet interface and released a new version (2.0) of the browser.
Support for communication protocols in Lynx is implemented using a version of libwww, forked from the library's code base in 1996. The supported protocols include Gopher, HTTP, HTTPS, FTP, NNTP and WAIS. Support for NNTP was added to libwww from ongoing Lynx development in 1994. Support for HTTPS was added to Lynx's fork of libwww later, initially as patches due to concerns about encryption.
Garrett Blythe created DosLynx in April 1994 and later joined the Lynx effort as well. Foteos Macrides ported much of Lynx to VMS and maintained it for a time. In 1995, Lynx was released under the GNU General Public License, and is now maintained by a group of volunteers.
Features
Browsing in Lynx consists of highlighting the chosen link using cursor keys, or having all links on a page numbered and entering the chosen link's number. Current versions support SSL and many HTML features. Tables are formatted using spaces, while frames are identified by name and can be explored as if they were separate pages. Lynx is not inherently able to display various types of non-text content on the web, such as images and video, but it can launch external programs, such as an image viewer or a video player, to handle such content.
Unlike most web browsers, Lynx does not support JavaScript, which many websites require to work correctly.
The speed benefits of text-only browsing are most apparent when using low bandwidth internet connections, or older computer hardware that may be slow to render image-heavy content.
Privacy
Because Lynx does not support graphics, web bugs that track user information are not fetched, meaning that web pages can be read without the privacy concerns of graphic web browsers. However, Lynx does support HTTP cookies, which can also be used to track user information. Lynx therefore supports cookie whitelisting and blacklisting, or alternatively cookie support can be disabled permanently.
As with conventional browsers, Lynx also supports browsing histories and page caching, both of which can raise privacy concerns.
Configurability
Lynx accepts configuration options from either command-line options or configuration files. There are 142 command line options according to its help message. The template configuration file lynx.cfg lists 233 configurable features. There is some overlap between the two, although there are command-line options such as -restrict which are not matched in lynx.cfg. In addition to pre-set options by command-line and configuration file, Lynx's behavior can be adjusted at runtime using its options menu. Again, there is some overlap between the settings. Lynx implements many of these runtime optional features, optionally (controlled through a setting in the configuration file) allowing the choices to be saved to a separate writable configuration file. The reason for restricting the options which can be saved originated in a usage of Lynx which was more common in the mid-1990s, i.e., using Lynx itself as a front-end application to the Internet accessed by dial-in connections.
Accessibility
Because Lynx is a text-based browser, it can be used for internet access by visually impaired users on a refreshable braille display and is easily compatible with text-to-speech software. As Lynx substitutes images, frames and other non-textual content with the text from alt, name and title HTML attributes and allows hiding the user interface elements, the browser becomes specifically suitable for use with cost-effective general purpose screen reading software. A version of Lynx specifically enhanced for use with screen readers on Windows was developed at Indian Institute of Technology Madras.
Remote access
Lynx is also useful for accessing websites from a remotely connected system in which no graphical display is available. Despite its text-only nature and age, it can still be used to effectively browse much of the modern web, including performing interactive tasks such as editing Wikipedia.
Web design and robots
Since Lynx will take keystrokes from a text file, it is still very useful for automated data entry, web page navigation, and web scraping. Consequently, Lynx is used in some web crawlers. Web designers may use Lynx to determine the way in which search engines and web crawlers see the sites that they develop. Online services that provide Lynx's view of a given web page are available.
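As one possible illustration (an added sketch, not taken from the Lynx documentation), the Python snippet below shells out to lynx -dump, which prints the rendered text of a page to standard output, and could serve as a building block for simple scraping. It assumes Lynx is installed on the system, and the URL is a placeholder.

    import subprocess

    def dump_page(url, timeout=30):
        """Render a page to plain text with Lynx's -dump option."""
        result = subprocess.run(
            ["lynx", "-dump", "-nolist", url],   # -nolist suppresses the trailing link list
            capture_output=True, text=True, timeout=timeout, check=True,
        )
        return result.stdout

    if __name__ == "__main__":
        print(dump_page("https://example.org/")[:500])   # placeholder URL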
Lynx is also used to test websites' performance. As one can run the browser from different locations over remote access technologies like telnet and ssh, one can use Lynx to test the web site's connection performance from different geographical locations simultaneously. Another possible web design application of the browser is quick checking of the site's links.
Supported platforms
Lynx was originally designed for Unix-like operating systems, though it was ported to VMS soon after its public release and to other systems, including DOS, Microsoft Windows, Classic Mac OS and OS/2. It was included in the default OpenBSD installation from OpenBSD 2.3 (May 1998) to 5.5 (May 2014); it remained in the main tree until July 2014 and has since been available through the ports tree. It can also be found in the repositories of most Linux distributions, as well as in the Homebrew and Fink repositories for macOS. Ports to BeOS, MINIX, QNX, AmigaOS and OS/2 are also available.
The sources can be built on many platforms; for example, mention is made of building for Google's Android operating system.
See also
Computer accessibility
Links (web browser)
ELinks
w3m
ModSecurity#Former Lynx browser blocking
Comparison of web browsers
Timeline of web browsers
Comparison of Usenet newsreaders
Notes
References
External links
1992 software
Cross-platform free software
Curses (programming library)
Free web browsers
Gopher clients
OS/2 web browsers
MacOS web browsers
Portable software
POSIX web browsers
RISC OS software
Software that uses S-Lang
Text-based web browsers
University of Kansas
Web browsers for AmigaOS
Web browsers for DOS
Free software programmed in C |
18562 | https://en.wikipedia.org/wiki/Leet | Leet | Leet (or "1337"), also known as eleet or leetspeak, is a system of modified spellings used primarily on the Internet. It often uses character replacements in ways that play on the similarity of their glyphs via reflection or other resemblance. Additionally, it modifies certain words based on a system of suffixes and alternate meanings. There are many dialects or linguistic varieties in different online communities.
The term "leet" is derived from the word elite, used as an adjective to describe skill or accomplishment, especially in the fields of online gaming and computer hacking. The leet lexicon includes spellings of the word as 1337 or leet.
History
Leet originated within bulletin board systems (BBS) in the 1980s, where having "elite" status on a BBS allowed a user access to file folders, games, and special chat rooms. The Cult of the Dead Cow hacker collective has been credited with the original coining of the term, in their text-files of that era. One theory is that it was developed to defeat text filters created by BBS or Internet Relay Chat system operators for message boards to discourage the discussion of forbidden topics, like cracking and hacking. Creative misspellings and ASCII-art-derived words were also a way to attempt to indicate one was knowledgeable about the culture of computer users.
Once reserved for hackers, crackers, and script kiddies, leet has since entered the mainstream. It is now also used to mock newbies, also known colloquially as n00bs, or newcomers, on websites, or in gaming communities. Some consider emoticons and ASCII art, like smiley faces, to be leet, while others maintain that leet consists of only symbolic word encryption. More obscure forms of leet, involving the use of symbol combinations and almost no letters or numbers, continue to be used for its original purpose of encrypted communication. It is also sometimes used as a scripting language. Variants of leet have been used for censorship purposes for many years; for instance "@$$" (ass) and "$#!+" (shit) are frequently seen to make a word appear censored to the untrained eye but obvious to a person familiar with leet. This enables coders and programmers especially to circumvent filters and speak about topics that would usually get banned. "Hacker" would end up as "H4x0r", for example.
Leet symbols, especially the number 1337, are Internet memes that have spilled over into popular culture. Signs that show the numbers "1337" are popular motifs for pictures and are shared widely across the Internet.
One of the earliest public examples of this substitution would be the album cover of Journey's Escape album which is stylized on the cover as "E5C4P3".
Orthography
One of the hallmarks of leet is its unique approach to orthography, using substitutions of other letters, or indeed of characters other than letters, to represent letters in a word. For more casual use of leet, the primary strategy is to use homoglyphs, symbols that closely resemble (to varying degrees) the letters for which they stand. The choice of symbol is not fixed: anything the reader can make sense of is valid. However, this practice is not extensively used in regular leet; more often it is seen in situations where the argot (i.e., secret language) characteristics of the system are required, either to exclude newbies or outsiders in general. In that case, anything the average reader cannot make sense of is valid, and a reader deserving of the underlying message is expected to work it out for themselves. Another use for leet orthographic substitutions is the creation of paraphrased passwords. Limitations imposed by websites on password length (usually no more than 36 characters) and the characters permitted (e.g. alphanumeric and symbols) require less extensive forms when used in this application.
Some examples of leet include B1ff and n00b, a term for the stereotypical newbie; the l33t programming language; and the web-comics Megatokyo and Homestuck, which contain characters who speak variations of leet.
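A minimal Python sketch of the homoglyph-substitution idea described above (an illustrative addition; the mapping below is only one of many possible choices, since the choice of symbol is not fixed):

    # One simple, casual leet mapping; real usage varies widely by community.
    LEET_MAP = str.maketrans({
        "a": "4", "e": "3", "i": "1", "o": "0",
        "s": "5", "t": "7", "l": "1", "b": "8",
    })

    def to_leet(text):
        """Apply a basic character-for-character leet substitution."""
        return text.lower().translate(LEET_MAP)

    print(to_leet("elite hacker"))   # '31173 h4ck3r'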
Morphology
Text rendered in leet is often characterized by distinctive, recurring forms.
-xor suffix
The meaning of this suffix is parallel with the English -er and -or suffixes (seen in hacker and lessor) in that it derives agent nouns from a verb stem. It is realized in two different forms: -xor and -zor. For example, the first may be seen in the word hax(x)or (H4x0r in leet) and the second in pwnzor. Additionally, this nominalization may also be inflected with all of the suffixes of regular English verbs. The letter 'o' is often replaced with the numeral 0.
-age suffix
Derivation of a noun from a verb stem is possible by attaching -age to the base form of any verb. Attested derivations are pwnage, skillage, and speakage. However, leet provides exceptions; the word leetage is acceptable, referring to actively being leet. These nouns are often used with a form of "to be" rather than "to have," e.g., "that was pwnage" rather than "he has pwnage". Either is a more emphatic way of expressing the simpler "he pwns," but the former implies that the person is embodying the trait rather than merely possessing it.
-ness suffix
Derivation of a noun from an adjective stem is done by attaching -ness to any adjective. This is entirely the same as the English form, except it is used much more often in Leet. Nouns such as lulzness and leetness are derivations using this suffix.
Words ending in -ed
When forming a past participle ending in -ed, the Leet user may replace the -e with an apostrophe, as was common in poetry of previous centuries, (e.g. "pwned" becomes "pwn'd"). Sometimes, the apostrophe is removed as well (e.g. "pwned" becomes "pwnd"). The word ending may also be substituted by -t (e.g. pwned becomes pwnt).
Use of the -& suffix
Words ending in -and, -anned, -ant, or a similar sound can sometimes be spelled with an ampersand (&) to express the ending sound (e.g. "This is the s&box", "I'm sorry, you've been b&", "&hill/&farm"). It is most commonly used with the word banned. An alternate form of "B&" is "B7", as the ampersand shares the "7" key on the standard US keyboard. It is often seen in the phrase "IBB7" (in before banned), which indicates that the poster believes that a previous poster will soon be banned from the site, channel, or board on which they are posting.
Grammar
Leet can be pronounced as a single syllable, rhyming with eat, by way of apheresis of the initial vowel of "elite". It may also be pronounced as two syllables. Like hacker slang, leet enjoys a looser grammar than standard English. The loose grammar, just like loose spelling, encodes some level of emphasis, ironic or otherwise. A reader must rely more on intuitive parsing of leet to determine the meaning of a sentence rather than the actual sentence structure. In particular, speakers of leet are fond of verbing nouns, turning nouns into verbs (and back again) as forms of emphasis, e.g. "Austin rocks" is weaker than "Austin roxxorz" (note spelling), which is weaker than "Au5t1N is t3h r0xx0rz" (note grammar), which is weaker than something like "0MFG D00D /\Ü571N 15 T3H l_l83Я 1337 Я0XX0ЯZ" (OMG, dude, Austin is the über-elite rocks-er!). In essence, all of these mean "Austin rocks," not necessarily the other options. Added words and misspellings add to the speaker's enjoyment. Leet, like hacker slang, employs analogy in construction of new words. For example, if haxored is the past tense of the verb "to hack" (hack → haxor → haxored), then winzored would be easily understood to be the past tense conjugation of "to win," even if the reader had not seen that particular word before.
Leet has its own colloquialisms, many of which originated as jokes based on common typing errors, habits of new computer users, or knowledge of cyberculture and history. Leet is not solely based upon one language or character set. Greek, Russian, and other languages have leet forms, and leet in one language may use characters from another where they are available. As such, while it may be referred to as a "cipher", a "dialect", or a "language", leet does not fit squarely into any of these categories. The term leet itself is often written 31337, or 1337, and many other variations. After the meaning of these became widely familiar, 10100111001 came to be used in its place, because it is the binary form of 1337 decimal, making it more of a puzzle to interpret. An increasingly common characteristic of leet is the changing of grammatical usage so as to be deliberately incorrect. The widespread popularity of deliberate misspelling is similar to the cult following of the "All your base are belong to us" phrase. Indeed, the online and computer communities have been international from their inception, so spellings and phrases typical of non-native speakers are quite common.
Vocabulary
Many words originally derived from leet have now become part of modern Internet slang, such as "pwned". The original driving forces of new vocabulary in leet were common misspellings and typing errors such as "teh" (generally considered lolspeak), and intentional misspellings, especially the "z" at the end of words ("skillz"). Another prominent example of a surviving leet expression is w00t, an exclamation of joy. w00t is sometimes used as a backronym for "We owned the other team."
New words (or corruptions thereof) may arise from a need to make one's username unique. As any given Internet service reaches more people, the number of names available to a given user is drastically reduced. While many users may wish to have the username "CatLover," for example, in many cases it is only possible for one user to have the moniker. As such, degradations of the name may evolve, such as "C@7L0vr." As the leet cipher is highly dynamic, there is a wider possibility for multiple users to share the "same" name, through combinations of spelling and transliterations.
Additionally, leet—the word itself—can be found in the screen-names and gamertags of many Internet and video games. Use of the term in such a manner announces a high level of skill, though such an announcement may be seen as baseless hubris.
Terminology and common misspellings
Warez is a plural shortening of "software", typically referring to cracked and redistributed software. Phreaking refers to the hacking of telephone systems and other non-Internet equipment. Teh originated as a typographical error of "the", and is sometimes spelled t3h. j00 takes the place of "you", originating from the affricate sound that occurs in place of the palatal approximant when you follows a word ending in an alveolar plosive consonant. Also borrowed from German is über, which means "over" or "above"; it usually appears as a prefix attached to adjectives, and is frequently written without the umlaut over the u.
Haxor and suxxor (suxorz)
Haxor, and derivations thereof, is leet for "hacker", and it is one of the most commonplace examples of the use of the -xor suffix. Suxxor (pronounced suck-zor) is a derogatory term which originated in warez culture and is currently used in multi-user environments such as multiplayer video games and instant messaging; it, like haxor, is one of the early leet words to use the -xor suffix. Suxxor is a modified version of "sucks" (the phrase "to suck"), and the meaning is the same as the English slang. Suxxor can be mistaken for Succer/Succker if used in the wrong context. Its negative definition essentially makes it the opposite of roxxor, and both can be used as a verb or a noun. The letters ck are often replaced with the Greek Χ (chi) in other words as well.
n00b
Within leet, the term n00b, and derivations thereof, is used extensively. The word means and derives from newbie (as in new and inexperienced or uninformed), and is used as a means of segregating them as less than the "elite," or even "normal," members of a group.
Owned and pwned
Owned and pwned (generally pronounced "poned") both refer to the domination of a player in a video game or argument (rather than just a win), or the successful hacking of a website or computer. It is a slang term derived from the verb own, meaning to appropriate or to conquer to gain ownership. As is a common characteristic of leet, the terms have also been adapted into noun and adjective forms, ownage and pwnage, which can refer to the situation of pwning or to the superiority of its subject (e.g., "He is a very good player. He is pwnage.").
The term was created accidentally by the misspelling of "own" in video game design due to the keyboard proximity of the "O" and "P" keys. It implies domination or humiliation of a rival, used primarily in the Internet-based video game culture to taunt an opponent who has just been soundly defeated (e.g., "You just got pwned!"). In 2015 Scrabble added pwn to their Official Scrabble Words list.
Pr0n
Pr0n is slang for pornography. This is a deliberately inaccurate spelling/pronunciation for porn, where a zero is often used to replace the letter O. It is sometimes used in legitimate communications (such as email discussion groups, Usenet, chat rooms, and Internet web pages) to circumvent language and content filters, which may reject messages as offensive or spam. The word also helps prevent search engines from associating commercial sites with pornography, which might result in unwelcome traffic. Pr0n is also sometimes spelled backwards (n0rp) to further obscure the meaning to potentially uninformed readers. It can also refer to ASCII art depicting pornographic images, or to photos of the internals of consumer and industrial hardware. Prawn, a spoof of the misspelling, has started to come into use, as well; in Grand Theft Auto: Vice City, a pornographer films his movies on "Prawn Island". Conversely, in the RPG Kingdom of Loathing, prawn, referring to a kind of crustacean, is spelled pr0n, leading to the creation of food items such as "pr0n chow mein".
See also
Calculator spelling
Faux Cyrillic
Geek Code
Jargon File, a glossary and usage dictionary of computer programmer slang
Padonkaffsky jargon
Notes
Footnotes
References
Further reading
External links
Leet Translator
Alphabets
Encodings
In-jokes
Internet culture
Internet memes
Internet slang
Latin-script representations
Nerd culture
Nonstandard spelling
Obfuscation
Social networking services
1990s slang |
18568 | https://en.wikipedia.org/wiki/List%20of%20algorithms | List of algorithms | The following is a list of algorithms along with one-line descriptions for each.
Automated planning
Combinatorial algorithms
General combinatorial algorithms
Brent's algorithm: finds a cycle in function value iterations using only two iterators
Floyd's cycle-finding algorithm: finds a cycle in function value iterations
Gale–Shapley algorithm: solves the stable marriage problem
Pseudorandom number generators (uniformly distributed—see also List of pseudorandom number generators for other PRNGs with varying degrees of convergence and varying statistical quality):
ACORN generator
Blum Blum Shub
Lagged Fibonacci generator
Linear congruential generator (a minimal sketch follows at the end of this list)
Mersenne Twister
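A minimal Python sketch of the linear congruential generator listed above (an illustrative addition; the modulus, multiplier and increment are the well-known Numerical Recipes constants, used here purely as an example):

    def lcg(seed, a=1664525, c=1013904223, m=2**32):
        """Linear congruential generator: x_{n+1} = (a*x_n + c) mod m."""
        x = seed
        while True:
            x = (a * x + c) % m
            yield x

    gen = lcg(seed=42)
    print([next(gen) for _ in range(5)])   # five pseudorandom 32-bit values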
Graph algorithms
Coloring algorithm: Graph coloring algorithm.
Hopcroft–Karp algorithm: finds a maximum-cardinality matching in a bipartite graph
Hungarian algorithm: finds a minimum-cost perfect matching in a weighted bipartite graph (the assignment problem)
Prüfer coding: conversion between a labeled tree and its Prüfer sequence
Tarjan's off-line lowest common ancestors algorithm: computes lowest common ancestors for pairs of nodes in a tree
Topological sort: finds linear order of nodes (e.g. jobs) based on their dependencies.
Graph drawing
Force-based algorithms (also known as force-directed algorithms or spring-based algorithm)
Spectral layout
Network theory
Network analysis
Link analysis
Girvan–Newman algorithm: detect communities in complex systems
Web link analysis
Hyperlink-Induced Topic Search (HITS) (also known as Hubs and authorities)
PageRank
TrustRank
Flow networks
Dinic's algorithm: is a strongly polynomial algorithm for computing the maximum flow in a flow network.
Edmonds–Karp algorithm: implementation of Ford–Fulkerson
Ford–Fulkerson algorithm: computes the maximum flow in a graph
Karger's algorithm: a Monte Carlo method to compute the minimum cut of a connected graph
Push–relabel algorithm: computes a maximum flow in a graph
Routing for graphs
Edmonds' algorithm (also known as Chu–Liu/Edmonds' algorithm): find maximum or minimum branchings
Euclidean minimum spanning tree: algorithms for computing the minimum spanning tree of a set of points in the plane
Longest path problem: find a simple path of maximum length in a given graph
Minimum spanning tree
Borůvka's algorithm
Kruskal's algorithm
Prim's algorithm
Reverse-delete algorithm
Nonblocking minimal spanning switch say, for a telephone exchange
Shortest path problem
Bellman–Ford algorithm: computes shortest paths in a weighted graph (where some of the edge weights may be negative)
Dijkstra's algorithm: computes shortest paths in a graph with non-negative edge weights (a minimal sketch appears at the end of this section)
Floyd–Warshall algorithm: solves the all pairs shortest path problem in a weighted, directed graph
Johnson's algorithm: All pairs shortest path algorithm in sparse weighted directed graph
Transitive closure problem: find the transitive closure of a given binary relation
Traveling salesman problem
Christofides algorithm
Nearest neighbour algorithm
Warnsdorff's rule: A heuristic method for solving the Knight's tour problem.
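A minimal Python sketch of Dijkstra's algorithm from the shortest-path list above (an illustrative addition using a binary heap; the example graph is hypothetical):

    import heapq

    def dijkstra(graph, source):
        """Shortest-path distances from source in a graph with non-negative edge weights.
        graph: dict mapping node -> list of (neighbour, weight) pairs."""
        dist = {source: 0}
        heap = [(0, source)]
        while heap:
            d, u = heapq.heappop(heap)
            if d > dist.get(u, float("inf")):
                continue                      # stale queue entry
            for v, w in graph.get(u, []):
                nd = d + w
                if nd < dist.get(v, float("inf")):
                    dist[v] = nd
                    heapq.heappush(heap, (nd, v))
        return dist

    example = {"a": [("b", 1), ("c", 4)], "b": [("c", 2)], "c": []}   # hypothetical graph
    print(dijkstra(example, "a"))   # {'a': 0, 'b': 1, 'c': 3}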
Graph search
A*: special case of best-first search that uses heuristics to improve speed
B*: a best-first graph search algorithm that finds the least-cost path from a given initial node to any goal node (out of one or more possible goals)
Backtracking: abandons partial solutions when they are found not to satisfy a complete solution
Beam search: is a heuristic search algorithm that is an optimization of best-first search that reduces its memory requirement
Beam stack search: integrates backtracking with beam search
Best-first search: traverses a graph in the order of likely importance using a priority queue
Bidirectional search: find the shortest path from an initial vertex to a goal vertex in a directed graph
Breadth-first search: traverses a graph level by level
Brute-force search: An exhaustive and reliable search method, but computationally inefficient in many applications.
D*: an incremental heuristic search algorithm
Depth-first search: traverses a graph branch by branch
Dijkstra's algorithm: A special case of A* for which no heuristic function is used
General Problem Solver: a seminal theorem-proving algorithm intended to work as a universal problem solver machine.
Iterative deepening depth-first search (IDDFS): a state space search strategy
Jump point search: An optimization to A* which may reduce computation time by an order of magnitude using further heuristics.
Lexicographic breadth-first search (also known as Lex-BFS): a linear time algorithm for ordering the vertices of a graph
Uniform-cost search: a tree search that finds the lowest-cost route where costs vary
SSS*: state space search traversing a game tree in a best-first fashion similar to that of the A* search algorithm
F*: a special-purpose algorithm for merging two arrays
Subgraphs
Cliques
Bron–Kerbosch algorithm: a technique for finding maximal cliques in an undirected graph
MaxCliqueDyn maximum clique algorithm: find a maximum clique in an undirected graph
Strongly connected components
Path-based strong component algorithm
Kosaraju's algorithm
Tarjan's strongly connected components algorithm
Subgraph isomorphism problem
Sequence algorithms
Approximate sequence matching
Bitap algorithm: fuzzy algorithm that determines if strings are approximately equal.
Phonetic algorithms
Daitch–Mokotoff Soundex: a Soundex refinement which allows matching of Slavic and Germanic surnames
Double Metaphone: an improvement on Metaphone
Match rating approach: a phonetic algorithm developed by Western Airlines
Metaphone: an algorithm for indexing words by their sound, when pronounced in English
NYSIIS: phonetic algorithm, improves on Soundex
Soundex: a phonetic algorithm for indexing names by sound, as pronounced in English
String metrics: computes a similarity or dissimilarity (distance) score between two pairs of text strings
Damerau–Levenshtein distance: computes a distance measure between two strings, improves on Levenshtein distance
Dice's coefficient (also known as the Dice coefficient): a similarity measure related to the Jaccard index
Hamming distance: sum number of positions which are different
Jaro–Winkler distance: is a measure of similarity between two strings
Levenshtein edit distance: computes a metric for the amount of difference between two sequences
Trigram search: search for text when the exact syntax or spelling of the target object is not precisely known
Selection algorithms
Quickselect
Introselect
Sequence search
Linear search: locates an item in an unsorted sequence
Selection algorithm: finds the kth largest item in a sequence
Ternary search: a technique for finding the minimum or maximum of a function that is either strictly increasing and then strictly decreasing or vice versa
Sorted lists
Binary search algorithm: locates an item in a sorted sequence (a minimal sketch follows at the end of this list)
Fibonacci search technique: search a sorted sequence using a divide and conquer algorithm that narrows down possible locations with the aid of Fibonacci numbers
Jump search (or block search): linear search on a smaller subset of the sequence
Predictive search: binary-like search which factors in magnitude of search term versus the high and low values in the search. Sometimes called dictionary search or interpolated search.
Uniform binary search: an optimization of the classic binary search algorithm
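A minimal Python sketch of the binary search algorithm listed above (an illustrative addition), operating on a sorted list:

    def binary_search(items, target):
        """Return the index of target in the sorted list items, or -1 if absent."""
        lo, hi = 0, len(items) - 1
        while lo <= hi:
            mid = (lo + hi) // 2
            if items[mid] == target:
                return mid
            if items[mid] < target:
                lo = mid + 1
            else:
                hi = mid - 1
        return -1

    print(binary_search([2, 3, 5, 7, 11, 13], 7))    # 3
    print(binary_search([2, 3, 5, 7, 11, 13], 4))    # -1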
Sequence merging
Simple merge algorithm
k-way merge algorithm
Union (merge, with elements on the output not repeated)
Sequence permutations
Fisher–Yates shuffle (also known as the Knuth shuffle): randomly shuffle a finite set (a minimal sketch follows at the end of this list)
Schensted algorithm: constructs a pair of Young tableaux from a permutation
Steinhaus–Johnson–Trotter algorithm (also known as the Johnson–Trotter algorithm): generates permutations by transposing elements
Heap's permutation generation algorithm: interchange elements to generate next permutation
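A minimal Python sketch of the Fisher–Yates (Knuth) shuffle listed above (an illustrative addition; in practice Python's random.shuffle already implements this):

    import random

    def fisher_yates_shuffle(items):
        """Shuffle a list in place so that every permutation is equally likely."""
        for i in range(len(items) - 1, 0, -1):
            j = random.randint(0, i)          # pick an index in [0, i]
            items[i], items[j] = items[j], items[i]

    deck = list(range(10))
    fisher_yates_shuffle(deck)
    print(deck)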
Sequence combinations
Sequence alignment
Dynamic time warping: measure similarity between two sequences which may vary in time or speed
Hirschberg's algorithm: finds the least cost sequence alignment between two sequences, as measured by their Levenshtein distance
Needleman–Wunsch algorithm: find global alignment between two sequences
Smith–Waterman algorithm: find local sequence alignment
Sequence sorting
Exchange sorts
Bubble sort: for each pair of indices, swap the items if out of order
Cocktail shaker sort or bidirectional bubble sort, a bubble sort traversing the list alternately from front to back and back to front
Comb sort
Gnome sort
Odd–even sort
Quicksort: divide list into two, with all items on the first list coming before all items on the second list; then sort the two lists. Often the method of choice
Humorous or ineffective
Bogosort
Stooge sort
Hybrid
Flashsort
Introsort: begin with quicksort and switch to heapsort when the recursion depth exceeds a certain level
Timsort: adaptive algorithm derived from merge sort and insertion sort. Used in Python 2.3 and up, and Java SE 7.
Insertion sorts
Insertion sort: determine where the current item belongs in the list of sorted ones, and insert it there
Library sort
Patience sorting
Shell sort: an attempt to improve insertion sort
Tree sort (binary tree sort): build binary tree, then traverse it to create sorted list
Cycle sort: in-place with theoretically optimal number of writes
Merge sorts
Merge sort: sort the first and second half of the list separately, then merge the sorted lists (a minimal sketch follows at the end of this list)
Slowsort
Strand sort
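A minimal Python sketch of merge sort as described above (an illustrative addition: sort each half, then merge):

    def merge_sort(items):
        """Return a sorted copy of items using top-down merge sort."""
        if len(items) <= 1:
            return list(items)
        mid = len(items) // 2
        left, right = merge_sort(items[:mid]), merge_sort(items[mid:])
        merged, i, j = [], 0, 0
        while i < len(left) and j < len(right):   # merge the two sorted halves
            if left[i] <= right[j]:
                merged.append(left[i]); i += 1
            else:
                merged.append(right[j]); j += 1
        return merged + left[i:] + right[j:]

    print(merge_sort([5, 2, 9, 1, 5, 6]))   # [1, 2, 5, 5, 6, 9]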
Non-comparison sorts
Bead sort
Bucket sort
Burstsort: build a compact, cache efficient burst trie and then traverse it to create sorted output
Counting sort
Pigeonhole sort
Postman sort: variant of Bucket sort which takes advantage of hierarchical structure
Radix sort: sorts strings letter by letter
Selection sorts
Heapsort: convert the list into a heap, keep removing the largest element from the heap and adding it to the end of the list
Selection sort: pick the smallest of the remaining elements, add it to the end of the sorted list
Smoothsort
Other
Bitonic sorter
Pancake sorting
Spaghetti sort
Topological sort
Unknown class
Samplesort
Subsequences
Kadane's algorithm: finds maximum sub-array of any size
Longest common subsequence problem: Find the longest subsequence common to all sequences in a set of sequences
Longest increasing subsequence problem: Find the longest increasing subsequence of a given sequence
Shortest common supersequence problem: Find the shortest supersequence that contains two or more sequences as subsequences
Substrings
Longest common substring problem: find the longest string (or strings) that is a substring (or are substrings) of two or more strings
Substring search
Aho–Corasick string matching algorithm: trie based algorithm for finding all substring matches to any of a finite set of strings
Boyer–Moore string-search algorithm: amortized linear (sublinear in most times) algorithm for substring search
Boyer–Moore–Horspool algorithm: Simplification of Boyer–Moore
Knuth–Morris–Pratt algorithm: substring search which bypasses reexamination of matched characters
Rabin–Karp string search algorithm: searches multiple patterns efficiently
Zhu–Takaoka string matching algorithm: a variant of Boyer–Moore
Ukkonen's algorithm: a linear-time, online algorithm for constructing suffix trees
Matching wildcards
Rich Salz' wildmat: a widely used open-source recursive algorithm
Krauss matching wildcards algorithm: an open-source non-recursive algorithm
Computational mathematics
Abstract algebra
Chien search: a recursive algorithm for determining roots of polynomials defined over a finite field
Schreier–Sims algorithm: computing a base and strong generating set (BSGS) of a permutation group
Todd–Coxeter algorithm: Procedure for generating cosets.
Computer algebra
Buchberger's algorithm: finds a Gröbner basis
Cantor–Zassenhaus algorithm: factor polynomials over finite fields
Faugère F4 algorithm: finds a Gröbner basis (also mentions the F5 algorithm)
Gosper's algorithm: find sums of hypergeometric terms that are themselves hypergeometric terms
Knuth–Bendix completion algorithm: for rewriting rule systems
Multivariate division algorithm: for polynomials in several indeterminates
Pollard's kangaroo algorithm (also known as Pollard's lambda algorithm ): an algorithm for solving the discrete logarithm problem
Polynomial long division: an algorithm for dividing a polynomial by another polynomial of the same or lower degree
Risch algorithm: an algorithm for the calculus operation of indefinite integration (i.e. finding antiderivatives)
Geometry
Closest pair problem: find the pair of points (from a set of points) with the smallest distance between them
Collision detection algorithms: check for the collision or intersection of two given solids
Cone algorithm: identify surface points
Convex hull algorithms: determining the convex hull of a set of points
Graham scan
Quickhull
Gift wrapping algorithm or Jarvis march
Chan's algorithm
Kirkpatrick–Seidel algorithm
Euclidean distance transform: computes the distance between every point in a grid and a discrete collection of points.
Geometric hashing: a method for efficiently finding two-dimensional objects represented by discrete points that have undergone an affine transformation
Gilbert–Johnson–Keerthi distance algorithm: determining the smallest distance between two convex shapes.
Jump-and-Walk algorithm: an algorithm for point location in triangulations
Laplacian smoothing: an algorithm to smooth a polygonal mesh
Line segment intersection: finding whether lines intersect, usually with a sweep line algorithm
Bentley–Ottmann algorithm
Shamos–Hoey algorithm
Minimum bounding box algorithms: find the oriented minimum bounding box enclosing a set of points
Nearest neighbor search: find the nearest point or points to a query point
Point in polygon algorithms: tests whether a given point lies within a given polygon
Point set registration algorithms: finds the transformation between two point sets to optimally align them.
Rotating calipers: determine all antipodal pairs of points and vertices on a convex polygon or convex hull.
Shoelace algorithm: determine the area of a polygon whose vertices are described by ordered pairs in the plane
Triangulation
Delaunay triangulation
Ruppert's algorithm (also known as Delaunay refinement): create quality Delaunay triangulations
Chew's second algorithm: create quality constrained Delaunay triangulations
Marching triangles: reconstruct two-dimensional surface geometry from an unstructured point cloud
Polygon triangulation algorithms: decompose a polygon into a set of triangles
Voronoi diagrams, geometric dual of Delaunay triangulation
Bowyer–Watson algorithm: create voronoi diagram in any number of dimensions
Fortune's Algorithm: create voronoi diagram
Quasitriangulation
Number theoretic algorithms
Binary GCD algorithm: Efficient way of calculating GCD.
Booth's multiplication algorithm
Chakravala method: a cyclic algorithm to solve indeterminate quadratic equations, including Pell's equation
Discrete logarithm:
Baby-step giant-step
Index calculus algorithm
Pollard's rho algorithm for logarithms
Pohlig–Hellman algorithm
Euclidean algorithm: computes the greatest common divisor
Extended Euclidean algorithm: also finds integer coefficients x and y with ax + by = gcd(a, b), and hence solves ax + by = c whenever gcd(a, b) divides c (a minimal sketch appears at the end of this section)
Integer factorization: breaking an integer into its prime factors
Congruence of squares
Dixon's algorithm
Fermat's factorization method
General number field sieve
Lenstra elliptic curve factorization
Pollard's p − 1 algorithm
Pollard's rho algorithm
prime factorization algorithm
Quadratic sieve
Shor's algorithm
Special number field sieve
Trial division
Multiplication algorithms: fast multiplication of two numbers
Karatsuba algorithm
Schönhage–Strassen algorithm
Toom–Cook multiplication
Modular square root: computing square roots modulo a prime number
Tonelli–Shanks algorithm
Cipolla's algorithm
Berlekamp's root finding algorithm
Odlyzko–Schönhage algorithm: calculates nontrivial zeroes of the Riemann zeta function
Lenstra–Lenstra–Lovász algorithm (also known as LLL algorithm): find a short, nearly orthogonal lattice basis in polynomial time
Primality tests: determining whether a given number is prime
AKS primality test
Baillie–PSW primality test
Fermat primality test
Lucas primality test
Miller–Rabin primality test
Sieve of Atkin
Sieve of Eratosthenes
Sieve of Sundaram
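A minimal Python sketch of the Euclidean and extended Euclidean algorithms listed earlier in this section (an illustrative addition):

    def gcd(a, b):
        """Greatest common divisor by the Euclidean algorithm."""
        while b:
            a, b = b, a % b
        return a

    def extended_gcd(a, b):
        """Return (g, x, y) with g = gcd(a, b) and a*x + b*y == g."""
        if b == 0:
            return a, 1, 0
        g, x, y = extended_gcd(b, a % b)
        return g, y, x - (a // b) * y

    print(gcd(240, 46))            # 2
    print(extended_gcd(240, 46))   # (2, -9, 47), since 240*(-9) + 46*47 == 2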
Numerical algorithms
Differential equation solving
Euler method
Backward Euler method
Trapezoidal rule (differential equations)
Linear multistep methods
Runge–Kutta methods
Euler integration
Multigrid methods (MG methods), a group of algorithms for solving differential equations using a hierarchy of discretizations
Partial differential equation:
Finite difference method
Crank–Nicolson method for diffusion equations
Lax–Wendroff for wave equations
Verlet integration: integrate Newton's equations of motion
Elementary and special functions
Computation of π:
Borwein's algorithm: an algorithm to calculate the value of 1/π
Gauss–Legendre algorithm: computes the digits of pi
Chudnovsky algorithm: A fast method for calculating the digits of π
Bailey–Borwein–Plouffe formula: (BBP formula) a spigot algorithm for the computation of the nth binary digit of π
Division algorithms: for computing quotient and/or remainder of two numbers
Long division
Restoring division
Non-restoring division
SRT division
Newton–Raphson division: uses Newton's method to find the reciprocal of D, and multiply that reciprocal by N to find the final quotient Q.
Goldschmidt division
Hyperbolic and Trigonometric Functions:
BKM algorithm: computes elementary functions using a table of logarithms
CORDIC: computes hyperbolic and trigonometric functions using a table of arctangents
Exponentiation:
Addition-chain exponentiation: exponentiation by positive integer powers that requires a minimal number of multiplications
Exponentiating by squaring: an algorithm used for the fast computation of large integer powers of a number
Montgomery reduction: an algorithm that allows modular arithmetic to be performed efficiently when the modulus is large
Multiplication algorithms: fast multiplication of two numbers
Booth's multiplication algorithm: a multiplication algorithm that multiplies two signed binary numbers in two's complement notation
Fürer's algorithm: an integer multiplication algorithm for very large numbers possessing a very low asymptotic complexity
Karatsuba algorithm: an efficient procedure for multiplying large numbers
Schönhage–Strassen algorithm: an asymptotically fast multiplication algorithm for large integers
Toom–Cook multiplication: (Toom3) a multiplication algorithm for large integers
Multiplicative inverse Algorithms: for computing a number's multiplicative inverse (reciprocal).
Newton's method
Rounding functions: the classic ways to round numbers
Spigot algorithm: A way to compute the value of a mathematical constant without knowing preceding digits
Square and Nth root of a number:
Alpha max plus beta min algorithm: an approximation of the square-root of the sum of two squares
Methods of computing square roots
nth root algorithm
Shifting nth-root algorithm: digit by digit root extraction
Summation:
Binary splitting: a divide and conquer technique which speeds up the numerical evaluation of many types of series with rational terms
Kahan summation algorithm: a more accurate method of summing floating-point numbers (a minimal sketch follows at the end of this list)
Unrestricted algorithm
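A minimal Python sketch of the Kahan (compensated) summation algorithm listed above (an illustrative addition), which tracks a running error term to reduce floating-point round-off:

    def kahan_sum(values):
        """Compensated summation: carries the low-order bits lost in each addition."""
        total = 0.0
        compensation = 0.0                   # running error term
        for x in values:
            y = x - compensation
            t = total + y                    # low-order digits of y may be lost here
            compensation = (t - total) - y   # recover what was lost
            total = t
        return total

    data = [1.0] + [1e-16] * 1_000_000
    naive = 0.0
    for x in data:
        naive += x
    print(naive)             # plain accumulation loses the small terms: 1.0
    print(kahan_sum(data))   # compensated: close to the true value 1.0000000001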
Geometric
Filtered back-projection: efficiently computes the inverse 2-dimensional Radon transform.
Level set method (LSM): a numerical technique for tracking interfaces and shapes
Interpolation and extrapolation
Birkhoff interpolation: an extension of polynomial interpolation
Cubic interpolation
Hermite interpolation
Lagrange interpolation: interpolation using Lagrange polynomials
Linear interpolation: a method of curve fitting using linear polynomials
Monotone cubic interpolation: a variant of cubic interpolation that preserves monotonicity of the data set being interpolated.
Multivariate interpolation
Bicubic interpolation, a generalization of cubic interpolation to two dimensions
Bilinear interpolation: an extension of linear interpolation for interpolating functions of two variables on a regular grid
Lanczos resampling ("Lanzosh"): a multivariate interpolation method used to compute new values for any digitally sampled data
Nearest-neighbor interpolation
Tricubic interpolation, a generalization of cubic interpolation to three dimensions
Pareto interpolation: a method of estimating the median and other properties of a population that follows a Pareto distribution.
Polynomial interpolation
Neville's algorithm
Spline interpolation: Reduces error with Runge's phenomenon.
De Boor algorithm: B-splines
De Casteljau's algorithm: Bézier curves
Trigonometric interpolation
Linear algebra
Eigenvalue algorithms
Arnoldi iteration
Inverse iteration
Jacobi method
Lanczos iteration
Power iteration
QR algorithm
Rayleigh quotient iteration
Gram–Schmidt process: orthogonalizes a set of vectors
Matrix multiplication algorithms
Cannon's algorithm: a distributed algorithm for matrix multiplication especially suitable for computers laid out in an N × N mesh
Coppersmith–Winograd algorithm: square matrix multiplication
Freivalds' algorithm: a randomized algorithm used to verify matrix multiplication
Strassen algorithm: faster matrix multiplication
Solving systems of linear equations
Biconjugate gradient method: solves systems of linear equations
Conjugate gradient: an algorithm for the numerical solution of particular systems of linear equations
Gaussian elimination
Gauss–Jordan elimination: solves systems of linear equations
Gauss–Seidel method: solves systems of linear equations iteratively
Levinson recursion: solves equation involving a Toeplitz matrix
Stone's method: also known as the strongly implicit procedure or SIP, is an algorithm for solving a sparse linear system of equations
Successive over-relaxation (SOR): method used to speed up convergence of the Gauss–Seidel method
Tridiagonal matrix algorithm (Thomas algorithm): solves systems of tridiagonal equations
Sparse matrix algorithms
Cuthill–McKee algorithm: reduce the bandwidth of a symmetric sparse matrix
Minimum degree algorithm: permute the rows and columns of a symmetric sparse matrix before applying the Cholesky decomposition
Symbolic Cholesky decomposition: Efficient way of storing sparse matrix
Monte Carlo
Gibbs sampling: generates a sequence of samples from the joint probability distribution of two or more random variables
Hybrid Monte Carlo: generates a sequence of samples using Hamiltonian weighted Markov chain Monte Carlo, from a probability distribution which is difficult to sample directly.
Metropolis–Hastings algorithm: used to generate a sequence of samples from the probability distribution of one or more variables
Wang and Landau algorithm: an extension of Metropolis–Hastings algorithm sampling
Numerical integration
MISER algorithm: Monte Carlo simulation, numerical integration
Root finding
Bisection method
False position method: approximates roots of a function
ITP method: minimax optimal and superlinear convergence simultaneously
Newton's method: finds zeros of functions with calculus (a minimal sketch follows at the end of this list)
Halley's method: uses first and second derivatives
Secant method: 2-point, 1-sided
False position method and Illinois method: 2-point, bracketing
Ridder's method: 3-point, exponential scaling
Muller's method: 3-point, quadratic interpolation
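A minimal Python sketch of Newton's method from the list above (an illustrative addition), finding the square root of 2 as the positive zero of f(x) = x^2 - 2:

    def newton(f, fprime, x0, tol=1e-12, max_iter=50):
        """Newton's method: iterate x_{n+1} = x_n - f(x_n)/f'(x_n) until the step is tiny."""
        x = x0
        for _ in range(max_iter):
            step = f(x) / fprime(x)
            x -= step
            if abs(step) < tol:
                return x
        raise RuntimeError("did not converge")

    # Example: the positive root of f(x) = x**2 - 2 is sqrt(2).
    root = newton(lambda x: x * x - 2.0, lambda x: 2.0 * x, x0=1.0)
    print(root)   # approximately 1.4142135623730951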
Optimization algorithms
Alpha–beta pruning: search to reduce number of nodes in minimax algorithm
Branch and bound
Bruss algorithm: see odds algorithm
Chain matrix multiplication
Combinatorial optimization: optimization problems where the set of feasible solutions is discrete
Greedy randomized adaptive search procedure (GRASP): successive constructions of a greedy randomized solution and subsequent iterative improvements of it through a local search
Hungarian method: a combinatorial optimization algorithm which solves the assignment problem in polynomial time
Constraint satisfaction
General algorithms for the constraint satisfaction
AC-3 algorithm
Difference map algorithm
Min conflicts algorithm
Chaff algorithm: an algorithm for solving instances of the boolean satisfiability problem
Davis–Putnam algorithm: check the validity of a first-order logic formula
Davis–Putnam–Logemann–Loveland algorithm (DPLL): an algorithm for deciding the satisfiability of propositional logic formula in conjunctive normal form, i.e. for solving the CNF-SAT problem
Exact cover problem
Algorithm X: a nondeterministic algorithm
Dancing Links: an efficient implementation of Algorithm X
Cross-entropy method: a general Monte Carlo approach to combinatorial and continuous multi-extremal optimization and importance sampling
Differential evolution
Dynamic Programming: problems exhibiting the properties of overlapping subproblems and optimal substructure
Ellipsoid method: is an algorithm for solving convex optimization problems
Evolutionary computation: optimization inspired by biological mechanisms of evolution
Evolution strategy
Gene expression programming
Genetic algorithms
Fitness proportionate selection – also known as roulette-wheel selection
Stochastic universal sampling
Truncation selection
Tournament selection
Memetic algorithm
Swarm intelligence
Ant colony optimization
Bees algorithm: a search algorithm which mimics the food foraging behavior of swarms of honey bees
Particle swarm
Frank-Wolfe algorithm: an iterative first-order optimization algorithm for constrained convex optimization
Golden-section search: an algorithm for finding the maximum of a real function
Gradient descent
Grid Search
Harmony search (HS): a metaheuristic algorithm mimicking the improvisation process of musicians
Interior point method
Linear programming
Benson's algorithm: an algorithm for solving linear vector optimization problems
Dantzig–Wolfe decomposition: an algorithm for solving linear programming problems with special structure
Delayed column generation
Integer linear programming: solve linear programming problems where some or all the unknowns are restricted to integer values
Branch and cut
Cutting-plane method
Karmarkar's algorithm: The first reasonably efficient algorithm that solves the linear programming problem in polynomial time.
Simplex algorithm: An algorithm for solving linear programming problems
Line search
Local search: a metaheuristic for solving computationally hard optimization problems
Random-restart hill climbing
Tabu search
Minimax used in game programming
Nearest neighbor search (NNS): find closest points in a metric space
Best Bin First: find an approximate solution to the nearest neighbor search problem in very-high-dimensional spaces
Newton's method in optimization
Nonlinear optimization
BFGS method: A nonlinear optimization algorithm
Gauss–Newton algorithm: An algorithm for solving nonlinear least squares problems.
Levenberg–Marquardt algorithm: An algorithm for solving nonlinear least squares problems.
Nelder–Mead method (downhill simplex method): A nonlinear optimization algorithm
Odds algorithm (Bruss algorithm): Finds the optimal strategy for predicting the last specific event in a sequence of random events
Random Search
Simulated annealing
Stochastic tunneling
Subset sum algorithm
Computational science
Astronomy
Doomsday algorithm: day of the week
Zeller's congruence is an algorithm to calculate the day of the week for any Julian or Gregorian calendar date
various Easter algorithms are used to calculate the day of Easter
Bioinformatics
Basic Local Alignment Search Tool also known as BLAST: an algorithm for comparing primary biological sequence information
Kabsch algorithm: calculate the optimal alignment of two sets of points in order to compute the root mean squared deviation between two protein structures.
Velvet: a set of algorithms manipulating de Bruijn graphs for genomic sequence assembly
Sorting by signed reversals: an algorithm for understanding genomic evolution.
Maximum parsimony (phylogenetics): an algorithm for finding the simplest phylogenetic tree to explain a given character matrix.
UPGMA: a distance-based phylogenetic tree construction algorithm.
Geoscience
Vincenty's formulae: a fast algorithm to calculate the distance between two latitude/longitude points on an ellipsoid
Geohash: a public domain algorithm that encodes a decimal latitude/longitude pair as a hash string
Linguistics
Lesk algorithm: word sense disambiguation
Stemming algorithm: a method of reducing words to their stem, base, or root form
Sukhotin's algorithm: a statistical classification algorithm for classifying characters in a text as vowels or consonants
Medicine
ESC algorithm for the diagnosis of heart failure
Manning Criteria for irritable bowel syndrome
Pulmonary embolism diagnostic algorithms
Texas Medication Algorithm Project
Physics
Constraint algorithm: a class of algorithms for satisfying constraints for bodies that obey Newton's equations of motion
Demon algorithm: a Monte Carlo method for efficiently sampling members of a microcanonical ensemble with a given energy
Featherstone's algorithm: computes the effects of forces applied to a structure of joints and links
Ground state approximation
Variational method
Ritz method
n-body problems
Barnes–Hut simulation: Solves the n-body problem in an approximate way that has order O(n log n) instead of O(n²) as in a direct-sum simulation.
Fast multipole method (FMM): speeds up the calculation of long-ranged forces
Rainflow-counting algorithm: Reduces a complex stress history to a count of elementary stress-reversals for use in fatigue analysis
Sweep and prune: a broad phase algorithm used during collision detection to limit the number of pairs of solids that need to be checked for collision
VEGAS algorithm: a method for reducing error in Monte Carlo simulations
Glauber dynamics: a method for simulating the Ising Model on a computer
Statistics
Algorithms for calculating variance: avoiding instability and numerical overflow
Approximate counting algorithm: Allows counting large number of events in a small register
Bayesian statistics
Nested sampling algorithm: a computational approach to the problem of comparing models in Bayesian statistics
Clustering Algorithms
Average-linkage clustering: a simple agglomerative clustering algorithm
Canopy clustering algorithm: an unsupervised pre-clustering algorithm related to the K-means algorithm
Complete-linkage clustering: a simple agglomerative clustering algorithm
DBSCAN: a density based clustering algorithm
Expectation-maximization algorithm
Fuzzy clustering: a class of clustering algorithms where each point has a degree of belonging to clusters
Fuzzy c-means
FLAME clustering (Fuzzy clustering by Local Approximation of MEmberships): define clusters in the dense parts of a dataset and perform cluster assignment solely based on the neighborhood relationships among objects
KHOPCA clustering algorithm: a local clustering algorithm, which produces hierarchical multi-hop clusters in static and mobile environments.
k-means clustering: cluster objects based on attributes into partitions
k-means++: a variation of this, using modified random seeds
k-medoids: similar to k-means, but chooses datapoints or medoids as centers
Linde–Buzo–Gray algorithm: a vector quantization algorithm to derive a good codebook
Lloyd's algorithm (Voronoi iteration or relaxation): group data points into a given number of categories, a popular algorithm for k-means clustering
OPTICS: a density based clustering algorithm with a visual evaluation method
Single-linkage clustering: a simple agglomerative clustering algorithm
SUBCLU: a subspace clustering algorithm
Ward's method: an agglomerative clustering algorithm, extended to more general Lance–Williams algorithms
WACA clustering algorithm: a local clustering algorithm with potentially multi-hop structures; for dynamic networks
Estimation Theory
Expectation-maximization algorithm: a class of related algorithms for finding maximum likelihood estimates of parameters in probabilistic models
Ordered subset expectation maximization (OSEM): used in medical imaging for positron emission tomography, single-photon emission computed tomography and X-ray computed tomography.
Odds algorithm (Bruss algorithm): optimal online search for a distinguished value in sequential random input
Kalman filter: estimate the state of a linear dynamic system from a series of noisy measurements
False nearest neighbor algorithm (FNN) estimates fractal dimension
Hidden Markov model
Baum–Welch algorithm: computes maximum likelihood estimates and posterior mode estimates for the parameters of a hidden Markov model
Forward-backward algorithm: a dynamic programming algorithm for computing the probability of a particular observation sequence
Viterbi algorithm: find the most likely sequence of hidden states in a hidden Markov model
Partial least squares regression: finds a linear model describing some predicted variables in terms of other observable variables
Queuing theory
Buzen's algorithm: an algorithm for calculating the normalization constant G(K) in the Gordon–Newell theorem
RANSAC (an abbreviation for "RANdom SAmple Consensus"): an iterative method to estimate parameters of a mathematical model from a set of observed data which contains outliers
Scoring algorithm: is a form of Newton's method used to solve maximum likelihood equations numerically
Yamartino method: calculate an approximation to the standard deviation σθ of wind direction θ during a single pass through the incoming data
Ziggurat algorithm: generates random numbers from a non-uniform distribution
Computer science
Computer architecture
Tomasulo algorithm: allows sequential instructions that would normally be stalled due to certain dependencies to execute non-sequentially
Computer graphics
Clipping
Line clipping
Cohen–Sutherland
Cyrus–Beck
Fast-clipping
Liang–Barsky
Nicholl–Lee–Nicholl
Polygon clipping
Sutherland–Hodgman
Vatti
Weiler–Atherton
Contour lines and Isosurfaces
Marching cubes: extract a polygonal mesh of an isosurface from a three-dimensional scalar field (sometimes called voxels)
Marching squares: generates contour lines for a two-dimensional scalar field
Marching tetrahedrons: an alternative to Marching cubes
Discrete Green's theorem: an algorithm for computing a double integral over a generalized rectangular domain in constant time; a natural extension of the summed area table algorithm
Flood fill: fills a connected region of a multi-dimensional array with a specified symbol
Global illumination algorithms: Considers direct illumination and reflection from other objects.
Ambient occlusion
Beam tracing
Cone tracing
Image-based lighting
Metropolis light transport
Path tracing
Photon mapping
Radiosity
Ray tracing
Hidden-surface removal or Visual surface determination
Newell's algorithm: eliminate polygon cycles in the depth sorting required in hidden-surface removal
Painter's algorithm: detects visible parts of a 3-dimensional scenery
Scanline rendering: constructs an image by moving an imaginary line over the image
Warnock algorithm
Line Drawing: graphical algorithm for approximating a line segment on discrete graphical media.
Bresenham's line algorithm: plots points of a 2-dimensional array to form a straight line between 2 specified points (uses decision variables)
DDA line algorithm: plots points of a 2-dimensional array to form a straight line between 2 specified points (uses floating-point math)
Xiaolin Wu's line algorithm: algorithm for line antialiasing.
Midpoint circle algorithm: an algorithm used to determine the points needed for drawing a circle
Ramer–Douglas–Peucker algorithm: Given a 'curve' composed of line segments to find a curve not too dissimilar but that has fewer points
Shading
Gouraud shading: an algorithm to simulate the differing effects of light and colour across the surface of an object in 3D computer graphics
Phong shading: an algorithm to interpolate surface normal-vectors for surface shading in 3D computer graphics
Slerp (spherical linear interpolation): quaternion interpolation for the purpose of animating 3D rotation
Summed area table (also known as an integral image): an algorithm for computing the sum of values in a rectangular subset of a grid in constant time
Cryptography
Asymmetric (public key) encryption:
ElGamal
Elliptic curve cryptography
MAE1
NTRUEncrypt
RSA
Digital signatures (asymmetric authentication):
DSA, and its variants:
ECDSA and Deterministic ECDSA
EdDSA (Ed25519)
RSA
Cryptographic hash functions (see also the section on message authentication codes):
BLAKE
MD5 – Note that there is now a method of generating collisions for MD5
RIPEMD-160
SHA-1 – Note that there is now a method of generating collisions for SHA-1
SHA-2 (SHA-224, SHA-256, SHA-384, SHA-512)
SHA-3 (SHA3-224, SHA3-256, SHA3-384, SHA3-512, SHAKE128, SHAKE256)
Tiger (TTH), usually used in Tiger tree hashes
WHIRLPOOL
Cryptographically secure pseudo-random number generators
Blum Blum Shub – based on the hardness of factorization
Fortuna, intended as an improvement on Yarrow algorithm
Linear-feedback shift register (note: many LFSR-based algorithms are weak or have been broken)
Yarrow algorithm
Key exchange
Diffie–Hellman key exchange
Elliptic-curve Diffie–Hellman (ECDH)
Key derivation functions, often used for password hashing and key stretching
bcrypt
PBKDF2
scrypt
Argon2
Message authentication codes (symmetric authentication algorithms, which take a key as a parameter):
HMAC: keyed-hash message authentication
Poly1305
SipHash
Secret sharing, Secret Splitting, Key Splitting, M of N algorithms
Blakley's Scheme
Shamir's Scheme
Symmetric (secret key) encryption:
Advanced Encryption Standard (AES), winner of NIST competition, also known as Rijndael
Blowfish
Twofish
Threefish
Data Encryption Standard (DES), sometimes called the Data Encryption Algorithm (DEA), winner of the NBS selection competition, replaced by AES for most purposes
IDEA
RC4 (cipher)
Tiny Encryption Algorithm (TEA)
Salsa20, and its updated variant ChaCha20
Post-quantum cryptography
Proof-of-work algorithms
Digital logic
Boolean minimization
Quine–McCluskey algorithm: Also called the Q–M algorithm, a programmable method for simplifying Boolean equations.
Petrick's method: Another algorithm for Boolean simplification.
Espresso heuristic logic minimizer: Fast algorithm for Boolean function minimization.
Machine learning and statistical classification
ALOPEX: a correlation-based machine-learning algorithm
Association rule learning: discover interesting relations between variables, used in data mining
Apriori algorithm
Eclat algorithm
FP-growth algorithm
One-attribute rule
Zero-attribute rule
Boosting (meta-algorithm): Use many weak learners to boost effectiveness
AdaBoost: adaptive boosting
BrownBoost: a boosting algorithm that may be robust to noisy datasets
LogitBoost: logistic regression boosting
LPBoost: linear programming boosting
Bootstrap aggregating (bagging): technique to improve stability and classification accuracy
Computer Vision
GrabCut: image segmentation based on Graph cuts
Decision Trees
C4.5 algorithm: an extension to ID3
ID3 algorithm (Iterative Dichotomiser 3): use heuristic to generate small decision trees
Clustering: a class of unsupervised learning algorithms for grouping and bucketing related input vectors.
k-nearest neighbors (k-NN): a method for classifying objects based on closest training examples in the feature space
Linde–Buzo–Gray algorithm: a vector quantization algorithm used to derive a good codebook
Locality-sensitive hashing (LSH): a method of performing probabilistic dimension reduction of high-dimensional data
Neural Network
Backpropagation: A supervised learning method which requires a teacher that knows, or can calculate, the desired output for any given input
Hopfield net: a Recurrent neural network in which all connections are symmetric
Perceptron: the simplest kind of feedforward neural network, a linear classifier (a short sketch follows this group of neural network entries).
Pulse-coupled neural networks (PCNN): Neural models proposed by modeling a cat's visual cortex and developed for high-performance biomimetic image processing.
Radial basis function network: an artificial neural network that uses radial basis functions as activation functions
Self-organizing map: an unsupervised network that produces a low-dimensional representation of the input space of the training samples
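As referenced in the perceptron entry above, a compact Python sketch of the classic update rule: the weights are nudged toward each misclassified example until the training set is separated or a pass budget runs out. The {-1, +1} label convention, learning rate and epoch count are illustrative assumptions.

```python
def train_perceptron(samples, labels, epochs=20, lr=1.0):
    """samples: list of feature tuples; labels: -1 or +1 for each sample."""
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        updated = False
        for x, y in zip(samples, labels):
            activation = sum(wi * xi for wi, xi in zip(w, x)) + b
            if y * activation <= 0:                 # misclassified (or on the boundary)
                w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                b += lr * y
                updated = True
        if not updated:                             # converged: every point classified correctly
            break
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1

# e.g. w, b = train_perceptron([(2, 1), (1, 3), (-1, -2), (-2, -1)], [1, 1, -1, -1])
```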
Random forest: classify using many decision trees
Reinforcement learning:
Q-learning: learns an action-value function that gives the expected utility of taking a given action in a given state and following a fixed policy thereafter
State–Action–Reward–State–Action (SARSA): learn a Markov decision process policy
Temporal difference learning
Relevance-Vector Machine (RVM): similar to SVM, but provides probabilistic classification
Supervised learning: Learning by examples (labelled data-set split into training-set and test-set)
Support-Vector Machine (SVM): a set of methods which divide multidimensional data by finding a dividing hyperplane with the maximum margin between the two sets
Structured SVM: allows training of a classifier for general structured output labels.
Winnow algorithm: related to the perceptron, but uses a multiplicative weight-update scheme
Programming language theory
C3 linearization: an algorithm used primarily to obtain a consistent linearization of a multiple inheritance hierarchy in object-oriented programming
Chaitin's algorithm: a bottom-up, graph coloring register allocation algorithm that uses cost/degree as its spill metric
Hindley–Milner type inference algorithm
Rete algorithm: an efficient pattern matching algorithm for implementing production rule systems
Sethi-Ullman algorithm: generates optimal code for arithmetic expressions
Parsing
CYK algorithm: An O(n³) algorithm for parsing context-free grammars in Chomsky normal form
Earley parser: Another O(n³) algorithm for parsing any context-free grammar
GLR parser: An algorithm for parsing any context-free grammar by Masaru Tomita. It is tuned for deterministic grammars, on which it runs in almost linear time, and O(n³) in the worst case.
Inside-outside algorithm: An O(n³) algorithm for re-estimating production probabilities in probabilistic context-free grammars
LL parser: A relatively simple linear time parsing algorithm for a limited class of context-free grammars
LR parser: A more complex linear time parsing algorithm for a larger class of context-free grammars. Variants:
Canonical LR parser
LALR (look-ahead LR) parser
Operator-precedence parser
SLR (Simple LR) parser
Simple precedence parser
Packrat parser: A linear time parsing algorithm supporting some context-free grammars and parsing expression grammars
Recursive descent parser: A top-down parser suitable for LL(k) grammars
Shunting-yard algorithm: convert an infix-notation math expression to postfix (a short sketch follows this parsing list)
Pratt parser
Lexical analysis
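As referenced in the shunting-yard entry above, a minimal Python sketch that converts a tokenized infix expression to postfix (reverse Polish) order. It handles only left-associative binary operators and parentheses; the token format and precedence table are illustrative assumptions.

```python
PRECEDENCE = {"+": 1, "-": 1, "*": 2, "/": 2}

def shunting_yard(tokens):
    """Convert infix tokens (numbers, + - * /, parentheses) to postfix order."""
    output, ops = [], []
    for tok in tokens:
        if tok in PRECEDENCE:
            # pop operators of greater or equal precedence (left associativity)
            while ops and ops[-1] in PRECEDENCE and PRECEDENCE[ops[-1]] >= PRECEDENCE[tok]:
                output.append(ops.pop())
            ops.append(tok)
        elif tok == "(":
            ops.append(tok)
        elif tok == ")":
            while ops and ops[-1] != "(":
                output.append(ops.pop())
            ops.pop()                      # discard the matching "("
        else:                              # operand
            output.append(tok)
    while ops:                             # flush remaining operators
        output.append(ops.pop())
    return output

# e.g. shunting_yard("3 + 4 * ( 2 - 1 )".split()) == ['3', '4', '2', '1', '-', '*', '+']
```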
Quantum algorithms
Deutsch–Jozsa algorithm: determines whether a Boolean function is constant or balanced
Grover's algorithm: provides quadratic speedup for many search problems
Shor's algorithm: provides exponential speedup (relative to currently known non-quantum algorithms) for factoring a number
Simon's algorithm: provides a provably exponential speedup (relative to any non-quantum algorithm) for a black-box problem
Theory of computation and automata
Hopcroft's algorithm, Moore's algorithm, and Brzozowski's algorithm: algorithms for minimizing the number of states in a deterministic finite automaton
Powerset construction: Algorithm to convert a nondeterministic automaton into an equivalent deterministic automaton.
Tarski–Kuratowski algorithm: a non-deterministic algorithm which provides an upper bound for the complexity of formulas in the arithmetical hierarchy and analytical hierarchy
Information theory and signal processing
Coding theory
Error detection and correction
BCH Codes
Berlekamp–Massey algorithm
Peterson–Gorenstein–Zierler algorithm
Reed–Solomon error correction
BCJR algorithm: decoding of error correcting codes defined on trellises (principally convolutional codes)
Forward error correction
Gray code
Hamming codes
Hamming(7,4): a Hamming code that encodes 4 bits of data into 7 bits by adding 3 parity bits
Hamming distance: the number of positions at which two equal-length strings differ (illustrated in the sketch below)
Hamming weight (population count): find the number of 1 bits in a binary word
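A short Python sketch tying together the two Hamming entries above: the Hamming weight of a word is its count of set bits, and the Hamming distance of two equal-width binary words is the weight of their XOR. The bit-clearing loop is the well-known n & (n - 1) trick; the function names are illustrative.

```python
def hamming_weight(n):
    """Population count: number of 1 bits in the binary representation of n."""
    count = 0
    while n:
        n &= n - 1          # clears the lowest set bit on each iteration
        count += 1
    return count

def hamming_distance(a, b):
    """Number of bit positions in which two equal-width words differ."""
    return hamming_weight(a ^ b)

# e.g. hamming_weight(0b1011) == 3 and hamming_distance(0b1011, 0b1001) == 1
```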
Redundancy checks
Adler-32
Cyclic redundancy check
Damm algorithm
Fletcher's checksum
Longitudinal redundancy check (LRC)
Luhn algorithm: a checksum method for validating identification numbers such as credit card numbers (a short sketch follows this group of checks)
Luhn mod N algorithm: extension of Luhn to non-numeric characters
Parity: simple/fast error detection technique
Verhoeff algorithm
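As referenced in the Luhn entry above, a small Python sketch of the check: working from the rightmost digit, every second digit is doubled (with 9 subtracted when the double exceeds 9) and the total must be divisible by 10. The function name and digit-string input format are illustrative assumptions.

```python
def luhn_valid(number):
    """Return True if the digit string passes the Luhn checksum."""
    total = 0
    for i, ch in enumerate(reversed(number)):
        d = int(ch)
        if i % 2 == 1:          # every second digit, counted from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

# e.g. luhn_valid("79927398713") is True, luhn_valid("79927398714") is False
```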
Lossless compression algorithms
Burrows–Wheeler transform: preprocessing useful for improving lossless compression
Context tree weighting
Delta encoding: aid to compression of data in which sequential data occurs frequently
Dynamic Markov compression: Compression using predictive arithmetic coding
Dictionary coders
Byte pair encoding (BPE)
Deflate
Lempel–Ziv
LZ77 and LZ78
Lempel–Ziv Jeff Bonwick (LZJB)
Lempel–Ziv–Markov chain algorithm (LZMA)
Lempel–Ziv–Oberhumer (LZO): speed oriented
Lempel–Ziv–Stac (LZS)
Lempel–Ziv–Storer–Szymanski (LZSS)
Lempel–Ziv–Welch (LZW)
LZWL: syllable-based variant
LZX
Lempel–Ziv Ross Williams (LZRW)
Entropy encoding: coding scheme that assigns codes to symbols so as to match code lengths with the probabilities of the symbols
Arithmetic coding: advanced entropy coding
Range encoding: same as arithmetic coding, but looked at in a slightly different way
Huffman coding: simple lossless compression taking advantage of relative character frequencies
Adaptive Huffman coding: adaptive coding technique based on Huffman coding
Package-merge algorithm: Optimizes Huffman coding subject to a length restriction on code strings
Shannon–Fano coding
Shannon–Fano–Elias coding: precursor to arithmetic encoding
Entropy coding with known entropy characteristics
Golomb coding: form of entropy coding that is optimal for alphabets following geometric distributions
Rice coding: form of entropy coding that is optimal for alphabets following geometric distributions
Truncated binary encoding
Unary coding: code that represents a number n with n ones followed by a zero
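A tiny Python sketch of the unary coding entry directly above; Golomb and Rice coding, listed in the same group, encode the quotient part of a value in exactly this unary form. Representing codewords as strings of '1' and '0' characters is an illustrative assumption.

```python
def unary_encode(n):
    """Represent a non-negative integer n as n ones followed by a zero."""
    return "1" * n + "0"

def unary_decode(bits):
    """Read one unary codeword from the front of a bit string."""
    n = 0
    while bits[n] == "1":
        n += 1
    return n, bits[n + 1:]      # decoded value and the remaining, unread bits

# e.g. unary_encode(4) == "11110"; unary_decode("11110" + "101") == (4, "101")
```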
Universal codes: encodes positive integers into binary code words
Elias delta, gamma, and omega coding
Exponential-Golomb coding
Fibonacci coding
Levenshtein coding
Fast Efficient & Lossless Image Compression System (FELICS): a lossless image compression algorithm
Incremental encoding: delta encoding applied to sequences of strings
Prediction by partial matching (PPM): an adaptive statistical data compression technique based on context modeling and prediction
Run-length encoding: lossless data compression that exploits runs of repeated characters (a short sketch follows this list)
SEQUITUR algorithm: lossless compression by incremental grammar inference on a string
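As referenced in the run-length encoding entry above, a minimal Python sketch in which each maximal run of a repeated symbol becomes a (symbol, count) pair. The pair-based output format is an illustrative assumption; practical codecs pack counts into bytes or bit fields.

```python
def rle_encode(data):
    """Collapse runs of repeated symbols into (symbol, run_length) pairs."""
    runs = []
    for symbol in data:
        if runs and runs[-1][0] == symbol:
            runs[-1][1] += 1                 # extend the current run
        else:
            runs.append([symbol, 1])         # start a new run
    return [(s, c) for s, c in runs]

def rle_decode(runs):
    """Expand (symbol, run_length) pairs back into the original string."""
    return "".join(symbol * count for symbol, count in runs)

# e.g. rle_encode("aaabccdddd") == [('a', 3), ('b', 1), ('c', 2), ('d', 4)]
#      rle_decode(rle_encode("aaabccdddd")) == "aaabccdddd"
```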
Lossy compression algorithms
3Dc: a lossy data compression algorithm for normal maps
Audio and Speech compression
A-law algorithm: standard companding algorithm
Code-excited linear prediction (CELP): low bit-rate speech compression
Linear predictive coding (LPC): lossy compression by representing the spectral envelope of a digital signal of speech in compressed form
Mu-law algorithm: standard analog signal compression or companding algorithm
Warped Linear Predictive Coding (WLPC)
Image compression
Block Truncation Coding (BTC): a type of lossy image compression technique for greyscale images
Embedded Zerotree Wavelet (EZW)
Fast Cosine Transform algorithms (FCT algorithms): compute the Discrete Cosine Transform (DCT) efficiently
Fractal compression: method used to compress images using fractals
Set Partitioning in Hierarchical Trees (SPIHT)
Wavelet compression: form of data compression well suited for image compression (sometimes also video compression and audio compression)
Transform coding: type of data compression for "natural" data like audio signals or photographic images
Video compression
Vector quantization: technique often used in lossy data compression
Digital signal processing
Adaptive-additive algorithm (AA algorithm): find the spatial frequency phase of an observed wave source
Discrete Fourier transform: determines the frequencies contained in a (segment of a) signal
Bluestein's FFT algorithm
Bruun's FFT algorithm
Cooley–Tukey FFT algorithm
Fast Fourier transform
Prime-factor FFT algorithm
Rader's FFT algorithm
Fast folding algorithm: an efficient algorithm for the detection of approximately periodic events within time series data
Gerchberg–Saxton algorithm: Phase retrieval algorithm for optical planes
Goertzel algorithm: identify a particular frequency component in a signal. Can be used for DTMF digit decoding.
Karplus-Strong string synthesis: physical modelling synthesis to simulate the sound of a hammered or plucked string or some types of percussion
Image processing
Contrast Enhancement
Histogram equalization: use histogram to improve image contrast
Adaptive histogram equalization: histogram equalization which adapts to local changes in contrast
Connected-component labeling: find and label disjoint regions
Dithering and half-toning
Error diffusion
Floyd–Steinberg dithering
Ordered dithering
Riemersma dithering
Elser difference-map algorithm: a search algorithm for general constraint satisfaction problems. Originally used for X-Ray diffraction microscopy
Feature detection
Canny edge detector: detect a wide range of edges in images
Generalised Hough transform
Hough transform
Marr–Hildreth algorithm: an early edge detection algorithm
SIFT (Scale-invariant feature transform): an algorithm to detect and describe local features in images.
SURF (Speeded Up Robust Features): a robust local feature detector, first presented by Herbert Bay et al. in 2006, that can be used in computer vision tasks like object recognition or 3D reconstruction. It is partly inspired by the SIFT descriptor. The standard version of SURF is several times faster than SIFT and claimed by its authors to be more robust against different image transformations than SIFT.
Richardson–Lucy deconvolution: image de-blurring algorithm
Blind deconvolution: image de-blurring algorithm when point spread function is unknown.
Median filtering
Seam carving: content-aware image resizing algorithm
Segmentation: partition a digital image into two or more regions
GrowCut algorithm: an interactive segmentation algorithm
Random walker algorithm
Region growing
Watershed transformation: a class of algorithms based on the watershed analogy
Software engineering
Cache algorithms
CHS conversion: converting between disk addressing systems
Double dabble: Convert binary numbers to BCD
Hash Function: convert a large, possibly variable-sized amount of data into a small datum, usually a single integer that may serve as an index into an array
Fowler–Noll–Vo hash function: fast with low collision rate
Pearson hashing: computes an 8-bit value only, optimized for 8-bit computers
Zobrist hashing: used in the implementation of transposition tables
Unicode Collation Algorithm
Xor swap algorithm: swaps the values of two variables without using a buffer
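A short Python sketch of the XOR swap entry directly above, exchanging two integers without an auxiliary variable. In Python this is purely illustrative (tuple assignment is the idiomatic swap); in C-like languages the same trick fails if both operands name the same memory location, since the first XOR would zero it.

```python
def xor_swap(a, b):
    """Swap two integers using XOR, without a temporary variable."""
    a ^= b              # a now holds a ^ b
    b ^= a              # b becomes b ^ (a ^ b), i.e. the original a
    a ^= b              # a becomes (a ^ b) ^ original a, i.e. the original b
    return a, b

# e.g. xor_swap(5, 9) == (9, 5)
```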
Database algorithms
Algorithms for Recovery and Isolation Exploiting Semantics (ARIES): transaction recovery
Join algorithms
Block nested loop
Hash join
Nested loop join
Sort-Merge Join
Distributed systems algorithms
Clock synchronization
Berkeley algorithm
Cristian's algorithm
Intersection algorithm
Marzullo's algorithm
Consensus (computer science): agreeing on a single value or history among unreliable processors
Chandra–Toueg consensus algorithm
Paxos algorithm
Raft (computer science)
Detection of Process Termination
Dijkstra-Scholten algorithm
Huang's algorithm
Lamport ordering: a partial ordering of events based on the happened-before relation
Leader election: a method for dynamically selecting a coordinator
Bully algorithm
Mutual exclusion
Lamport's Distributed Mutual Exclusion Algorithm
Naimi-Trehel's log(n) Algorithm
Maekawa's Algorithm
Raymond's Algorithm
Ricart–Agrawala Algorithm
Snapshot algorithm: record a consistent global state for an asynchronous system
Chandy–Lamport algorithm
Vector clocks: generate a partial ordering of events in a distributed system and detect causality violations
Memory allocation and deallocation algorithms
Buddy memory allocation: Algorithm to allocate memory such that fragmentation is reduced.
Garbage collectors
Cheney's algorithm: An improvement on the Semi-space collector
Generational garbage collector: Fast garbage collectors that segregate memory by age
Mark-compact algorithm: a combination of the mark-sweep algorithm and Cheney's copying algorithm
Mark and sweep
Semi-space collector: An early copying collector
Reference counting
Networking
Karn's algorithm: addresses the problem of getting accurate estimates of the round-trip time for messages when using TCP
Luleå algorithm: a technique for storing and searching internet routing tables efficiently
Network congestion
Exponential backoff
Nagle's algorithm: improve the efficiency of TCP/IP networks by coalescing packets
Truncated binary exponential backoff
Operating systems algorithms
Banker's algorithm: Algorithm used for deadlock avoidance.
Page replacement algorithms: Selecting the victim page under low memory conditions.
Adaptive replacement cache: better performance than LRU
Clock with Adaptive Replacement (CAR): is a page replacement algorithm that has performance comparable to Adaptive replacement cache
Process synchronization
Dekker's algorithm
Lamport's Bakery algorithm
Peterson's algorithm
Scheduling
Earliest deadline first scheduling
Fair-share scheduling
Least slack time scheduling
List scheduling
Multi level feedback queue
Rate-monotonic scheduling
Round-robin scheduling
Shortest job next
Shortest remaining time
Top-nodes algorithm: resource calendar management
I/O scheduling
Disk scheduling
Elevator algorithm: Disk scheduling algorithm that works like an elevator.
Shortest seek first: Disk scheduling algorithm to reduce seek time.
Other
'For You' algorithm: a proprietary algorithm developed by the social media network TikTok. Uploaded videos are released first to a selection of users who have been identified by the algorithm as being likely to engage with the video, based on their previous website viewing patterns.
See also
List of data structures
List of machine learning algorithms
List of pathfinding algorithms
List of algorithm general topics
List of terms relating to algorithms and data structures
Heuristic
References
19001 | https://en.wikipedia.org/wiki/Microsoft | Microsoft | Microsoft Corporation is an American multinational technology corporation which produces computer software, consumer electronics, personal computers, and related services. Its best-known software products are the Microsoft Windows line of operating systems, the Microsoft Office suite, and the Internet Explorer and Edge web browsers. Its flagship hardware products are the Xbox video game consoles and the Microsoft Surface lineup of touchscreen personal computers. Microsoft ranked No. 21 in the 2020 Fortune 500 rankings of the largest United States corporations by total revenue; it was the world's largest software maker by revenue as of 2016. It is one of the Big Five American information technology companies, alongside Alphabet, Amazon, Apple, and Meta.
Microsoft (the word being a portmanteau of "microcomputer software") was founded by Bill Gates and Paul Allen on April 4, 1975, to develop and sell BASIC interpreters for the Altair 8800. It rose to dominate the personal computer operating system market with MS-DOS in the mid-1980s, followed by Microsoft Windows. The company's 1986 initial public offering (IPO), and subsequent rise in its share price, created three billionaires and an estimated 12,000 millionaires among Microsoft employees. Since the 1990s, it has increasingly diversified from the operating system market and has made a number of corporate acquisitions, its largest being the acquisition of LinkedIn for $26.2 billion in December 2016, followed by its acquisition of Skype Technologies for $8.5 billion in May 2011.
Microsoft is market-dominant in the IBM PC compatible operating system market and the office software suite market, although it has lost the majority of the overall operating system market to Android. The company also produces a wide range of other consumer and enterprise software for desktops, laptops, tablets, gadgets, and servers, including Internet search (with Bing), the digital services market (through MSN), mixed reality (HoloLens), cloud computing (Azure), and software development (Visual Studio).
Steve Ballmer replaced Gates as CEO in 2000, and later envisioned a "devices and services" strategy. This unfolded with Microsoft acquiring Danger Inc. in 2008, entering the personal computer production market for the first time in June 2012 with the launch of the Microsoft Surface line of tablet computers, and later forming Microsoft Mobile through the acquisition of Nokia's devices and services division. Since Satya Nadella took over as CEO in 2014, the company has scaled back on hardware and has instead focused on cloud computing, a move that helped the company's shares reach its highest value since December 1999.
Earlier dethroned by Apple in 2010, in 2018 Microsoft reclaimed its position as the most valuable publicly traded company in the world. In April 2019, Microsoft reached a trillion-dollar market cap, becoming the third U.S. public company to be valued at over $1 trillion after Apple and Amazon respectively. Microsoft has the third-highest global brand valuation.
History
1972–1985: Founding
Childhood friends Bill Gates and Paul Allen sought to make a business using their skills in computer programming. In 1972, they founded Traf-O-Data, which sold a rudimentary computer to track and analyze automobile traffic data. Gates enrolled at Harvard University while Allen pursued a degree in computer science at Washington State University, though he later dropped out to work at Honeywell. The January 1975 issue of Popular Electronics featured Micro Instrumentation and Telemetry Systems's (MITS) Altair 8800 microcomputer, which inspired Allen to suggest that they could program a BASIC interpreter for the device. Gates called MITS and claimed that he had a working interpreter, and MITS requested a demonstration. Allen worked on a simulator for the Altair while Gates developed the interpreter, and it worked flawlessly when they demonstrated it to MITS in March 1975 in Albuquerque, New Mexico. MITS agreed to distribute it, marketing it as Altair BASIC. Gates and Allen established Microsoft on April 4, 1975, with Gates as CEO, and Allen suggested the name "Micro-Soft", short for micro-computer software. In August 1977, the company formed an agreement with ASCII Magazine in Japan, resulting in its first international office of ASCII Microsoft. Microsoft moved its headquarters to Bellevue, Washington, in January 1979.
Microsoft entered the operating system (OS) business in 1980 with its own version of Unix called Xenix, but it was MS-DOS that solidified the company's dominance. IBM awarded a contract to Microsoft in November 1980 to provide a version of the CP/M OS to be used in the IBM Personal Computer (IBM PC). For this deal, Microsoft purchased a CP/M clone called 86-DOS from Seattle Computer Products which it branded as MS-DOS, although IBM rebranded it to IBM PC DOS. Microsoft retained ownership of MS-DOS following the release of the IBM PC in August 1981. IBM had copyrighted the IBM PC BIOS, so other companies had to reverse engineer it in order for non-IBM hardware to run as IBM PC compatibles, but no such restriction applied to the operating systems. Microsoft eventually became the leading PC operating systems vendor. The company expanded into new markets with the release of the Microsoft Mouse in 1983, as well as with a publishing division named Microsoft Press.
Paul Allen resigned from Microsoft in 1983 after developing Hodgkin's disease. Allen claimed in Idea Man: A Memoir by the Co-founder of Microsoft that Gates wanted to dilute his share in the company when he was diagnosed with Hodgkin's disease because he did not think that he was working hard enough. Allen later invested in low-tech sectors, sports teams, commercial real estate, neuroscience, private space flight, and more.
1985–1994: Windows and Office
Microsoft released Microsoft Windows on November 20, 1985, as a graphical extension for MS-DOS, despite having begun jointly developing OS/2 with IBM the previous August. Microsoft moved its headquarters from Bellevue to Redmond, Washington, on February 26, 1986, and went public on March 13, with the resulting rise in stock making an estimated four billionaires and 12,000 millionaires from Microsoft employees. Microsoft released its version of OS/2 to original equipment manufacturers (OEMs) on April 2, 1987. In 1990, the Federal Trade Commission examined Microsoft for possible collusion due to the partnership with IBM, marking the beginning of more than a decade of legal clashes with the government. Meanwhile, the company was at work on Microsoft Windows NT, which was heavily based on their copy of the OS/2 code. It shipped on July 21, 1993, with a new modular kernel and the 32-bit Win32 application programming interface (API), making it easier to port from 16-bit (MS-DOS-based) Windows. Microsoft informed IBM of Windows NT, and the OS/2 partnership deteriorated.
In 1990, Microsoft introduced the Microsoft Office suite which bundled separate applications such as Microsoft Word and Microsoft Excel. On May 22, Microsoft launched Windows 3.0, featuring streamlined user interface graphics and improved protected mode capability for the Intel 386 processor, and both Office and Windows became dominant in their respective areas.
On July 27, 1994, the Department of Justice's Antitrust Division filed a competitive impact statement which said: "Beginning in 1988 and continuing until July 15, 1994, Microsoft induced many OEMs to execute anti-competitive 'per processor' licenses. Under a per-processor license, an OEM pays Microsoft a royalty for each computer it sells containing a particular microprocessor, whether the OEM sells the computer with a Microsoft operating system or a non-Microsoft operating system. In effect, the royalty payment to Microsoft when no Microsoft product is being used acts as a penalty, or tax, on the OEM's use of a competing PC operating system. Since 1988, Microsoft's use of per processor licenses has increased."
1995–2007: Foray into the Web, Windows 95, Windows XP, and Xbox
Following Bill Gates' internal "Internet Tidal Wave memo" on May 26, 1995, Microsoft began to redefine its offerings and expand its product line into computer networking and the World Wide Web. Apart from a few new companies, like Netscape, Microsoft was the only major, established company that acted fast enough to be a part of the World Wide Web practically from the start. Other companies like Borland, WordPerfect, Novell, IBM and Lotus, being much slower to adapt to the new situation, would give Microsoft market dominance. The company released Windows 95 on August 24, 1995, featuring pre-emptive multitasking, a completely new user interface with a novel start button, and 32-bit compatibility; similar to NT, it provided the Win32 API. Windows 95 came bundled with the online service MSN, which was at first intended to be a competitor to the Internet, and (for OEMs) Internet Explorer, a Web browser. Internet Explorer was not bundled with the retail Windows 95 boxes, because the boxes were printed before the team finished the Web browser, and instead was included in the Windows 95 Plus! pack. Backed by a high-profile marketing campaign and what The New York Times called "the splashiest, most frenzied, most expensive introduction of a computer product in the industry's history," Windows 95 quickly became a success. Branching out into new markets in 1996, Microsoft and General Electric's NBC unit created a new 24/7 cable news channel, MSNBC. Microsoft created Windows CE 1.0, a new OS designed for devices with low memory and other constraints, such as personal digital assistants. In October 1997, the Justice Department filed a motion in the Federal District Court, stating that Microsoft violated an agreement signed in 1994 and asked the court to stop the bundling of Internet Explorer with Windows.
On January 13, 2000, Bill Gates handed over the CEO position to Steve Ballmer, an old college friend of Gates and employee of the company since 1980, while creating a new position for himself as Chief Software Architect. Various companies including Microsoft formed the Trusted Computing Platform Alliance in October 1999 to (among other things) increase security and protect intellectual property through identifying changes in hardware and software. Critics decried the alliance as a way to enforce indiscriminate restrictions over how consumers use software, and over how computers behave, and as a form of digital rights management: for example the scenario where a computer is not only secured for its owner, but also secured against its owner as well. On April 3, 2000, a judgment was handed down in the case of United States v. Microsoft Corp., calling the company an "abusive monopoly." Microsoft later settled with the U.S. Department of Justice in 2004. On October 25, 2001, Microsoft released Windows XP, unifying the mainstream and NT lines of OS under the NT codebase. The company released the Xbox later that year, entering the video game console market dominated by Sony and Nintendo. In March 2004 the European Union brought antitrust legal action against the company, citing it abused its dominance with the Windows OS, resulting in a judgment of €497 million ($613 million) and requiring Microsoft to produce new versions of Windows XP without Windows Media Player: Windows XP Home Edition N and Windows XP Professional N. In November 2005, the company's second video game console, the Xbox 360, was released. There were two versions, a basic version for $299.99 and a deluxe version for $399.99.
Increasingly present in the hardware business following Xbox, Microsoft in 2006 released the Zune series of digital media players, a successor of its previous software platform Portable Media Center. These expanded on previous hardware commitments from Microsoft following its original Microsoft Mouse in 1983; as of 2007 the company sold the best-selling wired keyboard (Natural Ergonomic Keyboard 4000), mouse (IntelliMouse), and desktop webcam (LifeCam) in the United States. That year the company also launched the Surface "digital table", later renamed PixelSense.
2007–2011: Microsoft Azure, Windows Vista, Windows 7, and Microsoft Stores
Released in January 2007, the next version of Windows, Vista, focused on features, security and a redesigned user interface dubbed Aero. Microsoft Office 2007, released at the same time, featured a "Ribbon" user interface which was a significant departure from its predecessors. Relatively strong sales of both products helped to produce a record profit in 2007. The European Union imposed another fine of €899 million ($1.4 billion) for Microsoft's lack of compliance with the March 2004 judgment on February 27, 2008, saying that the company charged rivals unreasonable prices for key information about its workgroup and backoffice servers. Microsoft stated that it was in compliance and that "these fines are about the past issues that have been resolved". 2007 also saw the creation of a multi-core unit at Microsoft, following in the footsteps of server companies such as Sun and IBM.
Gates retired from his role as Chief Software Architect on June 27, 2008, a decision announced in June 2006, while retaining other positions related to the company in addition to being an advisor for the company on key projects. Azure Services Platform, the company's entry into the cloud computing market for Windows, launched on October 27, 2008. On February 12, 2009, Microsoft announced its intent to open a chain of Microsoft-branded retail stores, and on October 22, 2009, the first retail Microsoft Store opened in Scottsdale, Arizona; the same day Windows 7 was officially released to the public. Windows 7's focus was on refining Vista with ease-of-use features and performance enhancements, rather than an extensive reworking of Windows.
As the smartphone industry boomed in the late 2000s, Microsoft had struggled to keep up with its rivals in providing a modern smartphone operating system, falling behind Apple and Google-sponsored Android in the United States. As a result, in 2010 Microsoft revamped their aging flagship mobile operating system, Windows Mobile, replacing it with the new Windows Phone OS that was released in October that year. It used a new user interface design language, codenamed "Metro", which prominently used simple shapes, typography and iconography, utilizing the concept of minimalism. Microsoft implemented a new strategy for the software industry, providing a consistent user experience across all smartphones using the Windows Phone OS. It launched an alliance with Nokia in 2011 and Microsoft worked closely with the company to co-develop Windows Phone, but remained partners with long-time Windows Mobile OEM HTC. Microsoft is a founding member of the Open Networking Foundation started on March 23, 2011. Fellow founders were Google, HP Networking, Yahoo!, Verizon Communications, Deutsche Telekom and 17 other companies. This nonprofit organization is focused on providing support for a cloud computing initiative called Software-Defined Networking. The initiative is meant to speed innovation through simple software changes in telecommunications networks, wireless networks, data centers and other networking areas.
2011–2014: Windows 8/8.1, Xbox One, Outlook.com, and Surface devices
Following the release of Windows Phone, Microsoft undertook a gradual rebranding of its product range throughout 2011 and 2012, with the corporation's logos, products, services and websites adopting the principles and concepts of the Metro design language. Microsoft unveiled Windows 8, an operating system designed to power both personal computers and tablet computers, in Taipei in June 2011. A developer preview was released on September 13, which was subsequently replaced by a consumer preview on February 29, 2012, and released to the public in May. The Surface was unveiled on June 18, becoming the first computer in the company's history to have its hardware made by Microsoft. On June 25, Microsoft paid US$1.2 billion to buy the social network Yammer. On July 31, they launched the Outlook.com webmail service to compete with Gmail. On September 4, 2012, Microsoft released Windows Server 2012.
In July 2012, Microsoft sold its 50% stake in MSNBC, which it had run as a joint venture with NBC since 1996. On October 1, Microsoft announced its intention to launch a news operation, part of a new-look MSN, with Windows 8 later in the month. On October 26, 2012, Microsoft launched Windows 8 and the Microsoft Surface. Three days later, Windows Phone 8 was launched. To cope with the potential for an increase in demand for products and services, Microsoft opened a number of "holiday stores" across the U.S. to complement the increasing number of "bricks-and-mortar" Microsoft Stores that opened in 2012. On March 29, 2013, Microsoft launched a Patent Tracker.
In August 2012, the New York City Police Department announced a partnership with Microsoft for the development of the Domain Awareness System which is used for Police surveillance in New York City.
The Kinect, a motion-sensing input device made by Microsoft and designed as a video game controller, first introduced in November 2010, was upgraded for the 2013 release of the Xbox One video game console. Kinect's capabilities were revealed in May 2013: an ultra-wide 1080p camera, the ability to function in the dark due to an infrared sensor, higher-end processing power and new software, the ability to distinguish between fine movements (such as a thumb movement), and the ability to determine a user's heart rate by looking at their face. Microsoft filed a patent application in 2011 that suggests that the corporation may use the Kinect camera system to monitor the behavior of television viewers as part of a plan to make the viewing experience more interactive. On July 19, 2013, Microsoft stocks suffered their biggest one-day percentage sell-off since the year 2000, after its fourth-quarter report raised concerns among the investors on the poor showings of both Windows 8 and the Surface tablet. Microsoft suffered a loss of more than US$32 billion in market value.
In line with the maturing PC business, in July 2013, Microsoft announced that it would reorganize the business into four new business divisions, namely Operating System, Apps, Cloud, and Devices. All previous divisions would be dissolved into the new divisions without any workforce cuts. On September 3, 2013, Microsoft agreed to buy Nokia's mobile unit for $7 billion, following Amy Hood taking the role of CFO.
2014–2020: Windows 10, Microsoft Edge, and HoloLens
On February 4, 2014, Steve Ballmer stepped down as CEO of Microsoft and was succeeded by Satya Nadella, who previously led Microsoft's Cloud and Enterprise division. On the same day, John W. Thompson took on the role of chairman, in place of Bill Gates, who continued to participate as a technology advisor. Thompson became the second chairman in Microsoft's history. On April 25, 2014, Microsoft acquired Nokia Devices and Services for $7.2 billion. This new subsidiary was renamed Microsoft Mobile Oy. On September 15, 2014, Microsoft acquired the video game development company Mojang, best known for Minecraft, for $2.5 billion. On June 8, 2017, Microsoft acquired Hexadite, an Israeli security firm, for $100 million.
On January 21, 2015, Microsoft announced the release of its first interactive whiteboard, Microsoft Surface Hub. On July 29, 2015, Windows 10 was released, with its server sibling, Windows Server 2016, released in September 2016. In Q1 2015, Microsoft was the third largest maker of mobile phones, selling 33 million units (7.2% of all). While a large majority (at least 75%) of them did not run any version of Windows Phone (those other phones are not categorized as smartphones by Gartner), in the same time frame 8 million Windows smartphones (2.5% of all smartphones) were made by all manufacturers (but mostly by Microsoft). Microsoft's share of the U.S. smartphone market in January 2016 was 2.7%. During the summer of 2015 the company lost $7.6 billion related to its mobile-phone business, firing 7,800 employees.
On March 1, 2016, Microsoft announced the merger of its PC and Xbox divisions, with Phil Spencer announcing that Universal Windows Platform (UWP) apps would be the focus for Microsoft's gaming in the future. On January 24, 2017, Microsoft showcased Intune for Education at the BETT 2017 education technology conference in London. Intune for Education is a new cloud-based application and device management service for the education sector. In May 2016, the company announced it was laying off 1,850 workers, and taking an impairment and restructuring charge of $950 million. In June 2016, Microsoft announced a project named Microsoft Azure Information Protection. It aims to help enterprises protect their data as it moves between servers and devices. In November 2016, Microsoft joined the Linux Foundation as a Platinum member during Microsoft's Connect(); developer event in New York. The cost of each Platinum membership is US$500,000 per year. Some analysts deemed this unthinkable ten years prior, however, as in 2001 then-CEO Steve Ballmer called Linux "cancer". Microsoft planned to launch a preview of Intune for Education "in the coming weeks", with general availability scheduled for spring 2017, priced at $30 per device, or through volume licensing agreements.
In January 2018, Microsoft patched Windows 10 to account for CPU problems related to Intel's Meltdown security breach. The patch led to issues with the Microsoft Azure virtual machines reliant on Intel's CPU architecture. On January 12, Microsoft released PowerShell Core 6.0 for the macOS and Linux operating systems. In February 2018, Microsoft killed notification support for their Windows Phone devices which effectively ended firmware updates for the discontinued devices. In March 2018, Microsoft recalled Windows 10 S to change it to a mode for the Windows operating system rather than a separate and unique operating system. In March the company also established guidelines that censor users of Office 365 from using profanity in private documents. In April 2018, Microsoft released the source code for Windows File Manager under the MIT License to celebrate the program's 20th anniversary. In April the company further expressed willingness to embrace open source initiatives by announcing Azure Sphere as its own derivative of the Linux operating system. In May 2018, Microsoft partnered with 17 American intelligence agencies to develop cloud computing products. The project is dubbed "Azure Government" and has ties to the Joint Enterprise Defense Infrastructure (JEDI) surveillance program. On June 4, 2018, Microsoft officially announced the acquisition of GitHub for $7.5 billion, a deal that closed on October 26, 2018. On July 10, 2018, Microsoft revealed the Surface Go platform to the public. Later in the month it converted Microsoft Teams to gratis. In August 2018, Microsoft released two projects called Microsoft AccountGuard and Defending Democracy. It also unveiled Snapdragon 850 compatibility for Windows 10 on the ARM architecture.
In August 2018, Toyota Tsusho began a partnership with Microsoft to create fish farming tools using the Microsoft Azure application suite for Internet of things (IoT) technologies related to water management. Developed in part by researchers from Kindai University, the water pump mechanisms use artificial intelligence to count the number of fish on a conveyor belt, analyze the number of fish, and deduce the effectiveness of water flow from the data the fish provide. The specific computer programs used in the process fall under the Azure Machine Learning and the Azure IoT Hub platforms. In September 2018, Microsoft discontinued Skype Classic. On October 10, 2018, Microsoft joined the Open Invention Network community despite holding more than 60,000 patents. In November 2018, Microsoft agreed to supply 100,000 Microsoft HoloLens headsets to the United States military in order to "increase lethality by enhancing the ability to detect, decide and engage before the enemy." In November 2018, Microsoft introduced Azure Multi-Factor Authentication for Microsoft Azure. In December 2018, Microsoft announced Project Mu, an open source release of the Unified Extensible Firmware Interface (UEFI) core used in Microsoft Surface and Hyper-V products. The project promotes the idea of Firmware as a Service. In the same month, Microsoft announced the open source implementation of Windows Forms and the Windows Presentation Foundation (WPF) which will allow for further movement of the company toward the transparent release of key frameworks used in developing Windows desktop applications and software. December also saw the company discontinue the Microsoft Edge project in favor of Chromium backends for their browsers.
On February 20, 2019, Microsoft Corp said it would offer its cyber security service AccountGuard to 12 new markets in Europe including Germany, France and Spain, to close security gaps and protect customers in the political space from hacking. In February 2019, hundreds of Microsoft employees protested what they described as the company's war profiteering from a $480 million contract to develop virtual reality headsets for the United States Army.
2020–present: Acquisitions, Xbox Series X/S, and Windows 11
On March 26, 2020, Microsoft announced it was acquiring Affirmed Networks for about $1.35 billion. Due to the COVID-19 pandemic, Microsoft closed all of its retail stores indefinitely due to health concerns. On July 22, 2020, Microsoft announced plans to close its Mixer service, planning to move existing partners to Facebook Gaming.
On July 31, 2020, it was reported that Microsoft was in talks to acquire TikTok after the Trump administration ordered ByteDance to divest ownership of the application to the U.S. On August 3, 2020, after speculation on the deal, Donald Trump stated that Microsoft could buy the application, provided that the deal was completed by September 15, 2020, and that the United States Department of the Treasury should receive a portion if it were to go through.
On August 5, 2020, Microsoft stopped its xCloud game streaming test for iOS devices. According to Microsoft, the future of xCloud on iOS remains unclear and potentially out of Microsoft's hands. Apple has imposed a strict limit on "remote desktop clients", which means applications are only allowed to connect to a user-owned host device or gaming console owned by the user. On September 21, 2020, Microsoft announced its intent to acquire video game company ZeniMax Media, the parent company of Bethesda Softworks, for about $7.5 billion, with the deal expected to close in the second half of fiscal year 2021. On March 9, 2021, the acquisition was finalized and ZeniMax Media became part of Microsoft's Xbox Game Studios division. The total price of the deal was $8.1 billion.
On September 22, 2020, Microsoft announced that it had an exclusive license to use OpenAI's GPT-3 artificial intelligence language generator. GPT-3's predecessor, GPT-2, made headlines for being "too dangerous to release" and had numerous capabilities, including designing websites, prescribing medication, answering questions and penning articles. On November 10, 2020, Microsoft released the Xbox Series X and Xbox Series S video game consoles.
In April 2021, Microsoft said that it would buy Nuance Communications for about $16 billion in cash. In 2021, in part due to the strong quarterly earnings spurred by the COVID-19 pandemic, Microsoft's valuation approached $2 trillion. The increased necessity for remote work and distance education drove up the demand for cloud-computing services and grew the company's gaming sales.
On June 24, 2021, Microsoft announced Windows 11 during a livestream. The announcement caused some confusion, since Microsoft had previously said that Windows 10 would be the last version of the operating system. Windows 11, scheduled for Fall 2021, was released to the general public on October 5, 2021.
In October 2021, Microsoft announced that it had begun rolling out end-to-end encryption (E2EE) support for Microsoft Teams calls in order to secure business communication while using video conferencing software. Users can ensure that their calls are encrypted and can utilize a security code which both parties on a call must verify on their respective ends. On October 7, Microsoft acquired Ally.io, a software service that measures companies' progress against OKRs. Microsoft plans to incorporate Ally.io into its Viva family of employee experience products.
On January 18, 2022, Microsoft announced the acquisition of American video game developer and holding company Activision Blizzard in an all-cash deal worth $68.7 billion. Activision Blizzard is best known for producing franchises including Warcraft, Diablo, Call of Duty, StarCraft, Candy Crush Saga, and Overwatch. Activision and Microsoft each released statements saying the acquisition was to benefit their businesses in the metaverse; many saw Microsoft's acquisition of video game studios as an attempt to compete against Meta Platforms, with TheStreet referring to Microsoft wanting to become "the Disney of the metaverse". Microsoft has not released statements regarding Activision's recent legal controversies regarding employee abuse, but reports have alleged that Activision CEO Bobby Kotick, a major target of the controversy, will leave the company after the acquisition is finalized. The deal is expected to close in 2023, following a review by the US Federal Trade Commission.
Corporate affairs
Board of directors
The company is run by a board of directors made up of mostly company outsiders, as is customary for publicly traded companies. Members of the board of directors as of July 2020 are Satya Nadella, Reid Hoffman, Hugh Johnston, Teri List-Stoll, Sandi Peterson, Penny Pritzker, Charles Scharf, Arne Sorenson, John W. Stanton, John W. Thompson, Emma Walmsley and Padmasree Warrior. Board members are elected every year at the annual shareholders' meeting using a majority vote system. There are four committees within the board that oversee more specific matters. These committees include the Audit Committee, which handles accounting issues with the company including auditing and reporting; the Compensation Committee, which approves compensation for the CEO and other employees of the company; the Governance and Nominating Committee, which handles various corporate matters including the nomination of the board; and the Regulatory and Public Policy Committee, which includes legal/antitrust matters, along with privacy, trade, digital safety, artificial intelligence, and environmental sustainability.
On March 13, 2020, Gates announced that he was leaving the board of directors of Microsoft and Berkshire Hathaway to focus more on his philanthropic efforts. Aaron Tilley of The Wall Street Journal described this as "marking the biggest boardroom departure in the tech industry since the death of longtime rival and Apple Inc. co-founder Steve Jobs."
On January 13, 2022, The Wall Street Journal reported that Microsoft's board of directors planned to hire an external law firm to review its sexual harassment and gender discrimination policies, and to release a summary of how the company handled past allegations of misconduct against Bill Gates and other corporate executives.
Chief executives
Bill Gates (1975–2000)
Steve Ballmer (2000–2014)
Satya Nadella (2014–present)
Financial
When Microsoft went public and launched its initial public offering (IPO) in 1986, the opening stock price was $21; after the trading day, the price closed at $27.75. As of July 2010, with the company's nine stock splits, any IPO shares would be multiplied by 288; given the splits and other factors, the split-adjusted IPO price works out to about 9 cents per share. The stock price peaked in 1999 at around $119 ($60.928, adjusting for splits). The company began to offer a dividend on January 16, 2003, starting at eight cents per share for the fiscal year followed by a dividend of sixteen cents per share the subsequent year, switching from yearly to quarterly dividends in 2005 with eight cents a share per quarter and a special one-time payout of three dollars per share for the second quarter of the fiscal year. Though the company had subsequent increases in dividend payouts, the price of Microsoft's stock remained steady for years.
Standard & Poor's and Moody's Investors Service have both given a AAA rating to Microsoft, whose assets were valued at $41 billion as compared to only $8.5 billion in unsecured debt. Consequently, in February 2011 Microsoft released a corporate bond amounting to $2.25 billion with relatively low borrowing rates compared to government bonds. For the first time in 20 years Apple Inc. surpassed Microsoft in Q1 2011 quarterly profits and revenues due to a slowdown in PC sales and continuing huge losses in Microsoft's Online Services Division (which contains its search engine Bing). Microsoft profits were $5.2 billion, while Apple Inc. profits were $6 billion, on revenues of $14.5 billion and $24.7 billion respectively. Microsoft's Online Services Division has been continuously loss-making since 2006 and in Q1 2011 it lost $726 million. This follows a loss of $2.5 billion for the year 2010.
On July 20, 2012, Microsoft posted its first quarterly loss ever, despite earning record revenues for the quarter and fiscal year, with a net loss of $492 million due to a writedown related to the advertising company aQuantive, which had been acquired for $6.2 billion back in 2007. As of January 2014, Microsoft's market capitalization stood at $314B, making it the 8th largest company in the world by market capitalization. On November 14, 2014, Microsoft overtook ExxonMobil to become the second most-valuable company by market capitalization, behind only Apple Inc. Its total market value was over $410B—with the stock price hitting $50.04 a share, the highest since early 2000. In 2015, Reuters reported that Microsoft Corp had earnings abroad of $76.4 billion which were untaxed by the Internal Revenue Service. Under U.S. law, corporations don't pay income tax on overseas profits until the profits are brought into the United States.
In November 2018, the company won a $480 million military contract with the U.S. government to bring augmented reality (AR) headset technology into the weapon repertoires of American soldiers. The two-year contract may result in follow-on orders of more than 100,000 headsets, according to documentation describing the bidding process. One of the contract's tag lines for the augmented reality technology seems to be its ability to enable "25 bloodless battles before the 1st battle", suggesting that actual combat training is going to be an essential aspect of the augmented reality headset capabilities.
Subsidiaries
Microsoft is an international business and, as such, operates subsidiaries in the national markets where it does business. An example is Microsoft Canada, which it established in 1985. Other countries have similar subsidiaries, which return profits to the Redmond headquarters and dividends to the holders of MSFT stock.
Marketing
In 2004, Microsoft commissioned research firms to do independent studies comparing the total cost of ownership (TCO) of Windows Server 2003 to Linux; the firms concluded that companies found Windows easier to administrate than Linux, thus those using Windows would administrate faster resulting in lower costs for their company (i.e. lower TCO). This spurred a wave of related studies; a study by the Yankee Group concluded that upgrading from one version of Windows Server to another costs a fraction of the switching costs from Windows Server to Linux, although companies surveyed noted the increased security and reliability of Linux servers and concern about being locked into using Microsoft products. Another study, released by the Open Source Development Labs, claimed that the Microsoft studies were "simply outdated and one-sided" and their survey concluded that the TCO of Linux was lower due to Linux administrators managing more servers on average and other reasons.
As part of the "Get the Facts" campaign, Microsoft highlighted the .NET Framework trading platform that it had developed in partnership with Accenture for the London Stock Exchange, claiming that it provided "five nines" reliability. After suffering extended downtime and unreliability the London Stock Exchange announced in 2009 that it was planning to drop its Microsoft solution and switch to a Linux-based one in 2010.
In 2012, Microsoft hired a political pollster named Mark Penn, whom The New York Times called "famous for bulldozing" his political opponents, as Executive Vice-president, Advertising and Strategy. Penn created a series of negative advertisements targeting one of Microsoft's chief competitors, Google. The advertisements, called "Scroogled", attempt to make the case that Google is "screwing" consumers with search results rigged to favor Google's paid advertisers, that Gmail violates the privacy of its users by placing ad results related to the content of their emails, and that shopping results favor Google products. Tech publications like TechCrunch have been highly critical of the advertising campaign, while Google employees have embraced it.
Layoffs
In July 2014, Microsoft announced plans to lay off 18,000 employees. Microsoft employed 127,104 people as of June 5, 2014, making this about a 14 percent reduction of its workforce and the biggest Microsoft layoff ever. This included 12,500 professional and factory personnel. Previously, Microsoft had eliminated 5,800 jobs in 2009 in line with the Great Recession of 2007–2009. In September 2014, Microsoft laid off 2,100 people, including 747 people in the Seattle–Redmond area, where the company is headquartered. The firings came as a second wave of the layoffs that were previously announced. This brought the total number to over 15,000 out of the 18,000 expected cuts. In October 2014, Microsoft revealed that it was almost done with the elimination of 18,000 employees, which was its largest-ever layoff sweep. In July 2015, Microsoft announced another 7,800 job cuts in the next several months. In May 2016, Microsoft announced another 1,850 job cuts, mostly in its Nokia mobile phone division. As a result, the company would record an impairment and restructuring charge of approximately $950 million, of which approximately $200 million would relate to severance payments.
United States government
Microsoft provides information about reported bugs in their software to intelligence agencies of the United States government, prior to the public release of the fix. A Microsoft spokesperson has stated that the corporation runs several programs that facilitate the sharing of such information with the U.S. government. Following media reports about PRISM, NSA's massive electronic surveillance program, in May 2013, several technology companies were identified as participants, including Microsoft. According to leaks about the program, Microsoft joined the PRISM program in 2007. However, in June 2013, an official statement from Microsoft flatly denied its participation in the program.
During the first six months of 2013, Microsoft received requests that affected between 15,000 and 15,999 accounts. In December 2013, the company made a statement to further emphasize that it takes its customers' privacy and data protection very seriously, even saying that "government snooping potentially now constitutes an 'advanced persistent threat,' alongside sophisticated malware and cyber attacks". The statement also marked the beginning of a three-part program to enhance Microsoft's encryption and transparency efforts. On July 1, 2014, as part of this program, Microsoft opened the first (of many) Microsoft Transparency Center, which provides "participating governments with the ability to review source code for our key products, assure themselves of their software integrity, and confirm there are no 'back doors'". Microsoft has also argued that the United States Congress should enact strong privacy regulations to protect consumer data.
In April 2016, the company sued the U.S. government, arguing that secrecy orders were preventing the company from disclosing warrants to customers in violation of the company's and customers' rights. Microsoft argued that it was unconstitutional for the government to indefinitely ban Microsoft from informing its users that the government was requesting their emails and other documents, and that the Fourth Amendment made it so people or businesses had the right to know if the government searches or seizes their property. On October 23, 2017, Microsoft said it would drop the lawsuit as a result of a policy change by the United States Department of Justice (DoJ). The DoJ had "changed data request rules on alerting the Internet users about agencies accessing their information."
Corporate identity
Corporate culture
Technical reference for developers and articles for various Microsoft magazines such as Microsoft Systems Journal (MSJ) are available through the Microsoft Developer Network (MSDN). MSDN also offers subscriptions for companies and individuals, and the more expensive subscriptions usually offer access to pre-release beta versions of Microsoft software. In April 2004, Microsoft launched a community site for developers and users, titled Channel 9, that provides a wiki and an Internet forum. Another community site that provides daily videocasts and other services, On10.net, launched on March 3, 2006. Free technical support is traditionally provided through online Usenet newsgroups, and CompuServe in the past, monitored by Microsoft employees; there can be several newsgroups for a single product. Helpful people can be elected by peers or Microsoft employees for Microsoft Most Valuable Professional (MVP) status, which entitles them to a sort of special social status and possibilities for awards and other benefits.
Microsoft is noted for its internal lexicon; the expression "eating your own dog food" is used to describe the policy of using pre-release and beta versions of products inside Microsoft in an effort to test them in "real-world" situations. This is usually shortened to just "dog food" and is used as noun, verb, and adjective. Another bit of jargon, FYIFV or FYIV ("Fuck You, I'm [Fully] Vested"), is used by an employee to indicate they are financially independent and can avoid work anytime they wish.
Microsoft is an outspoken opponent of the cap on H-1B visas, which allow companies in the U.S. to employ certain foreign workers. Bill Gates claims the cap on H1B visas makes it difficult to hire employees for the company, stating "I'd certainly get rid of the H1B cap" in 2005. Critics of H1B visas argue that relaxing the limits would result in increased unemployment for U.S. citizens due to H1B workers working for lower salaries. The Human Rights Campaign Corporate Equality Index, a report of how progressive the organization deems company policies towards LGBT employees, rated Microsoft as 87% from 2002 to 2004 and as 100% from 2005 to 2010 after they allowed gender expression.
In August 2018, Microsoft implemented a policy for all companies providing subcontractors to require 12 weeks of paid parental leave to each employee. This expands on the former requirement from 2015 requiring 15 days of paid vacation and sick leave each year. In 2015, Microsoft established its own parental leave policy to allow 12 weeks off for parental leave with an additional 8 weeks for the parent who gave birth.
Environment
In 2011, Greenpeace released a report rating the top ten big brands in cloud computing on their sources of electricity for their data centers. At the time, data centers consumed up to 2% of all global electricity and this amount was projected to increase. Phil Radford of Greenpeace said "we are concerned that this new explosion in electricity use could lock us into old, polluting energy sources instead of the clean energy available today," and called on Amazon, Microsoft and other leaders of the information-technology industry to embrace clean energy to power their cloud-based data centers. In 2013, Microsoft agreed to buy power generated by a Texas wind project to power one of its data centers. Microsoft is ranked 17th in Greenpeace's Guide to Greener Electronics (16th Edition), which ranks 18 electronics manufacturers according to their policies on toxic chemicals, recycling and climate change. Microsoft's timeline for phasing out brominated flame retardants (BFRs) and phthalates in all products is 2012, but its commitment to phasing out PVC is not clear. As of January 2011, it had no products that were completely free from PVC and BFRs.
Microsoft's main U.S. campus received a silver certification from the Leadership in Energy and Environmental Design (LEED) program in 2008, and it installed over 2,000 solar panels on top of its buildings at its Silicon Valley campus, generating approximately 15 percent of the total energy needed by the facilities in April 2005. Microsoft makes use of alternative forms of transit. It created one of the world's largest private bus systems, the "Connector", to transport people from outside the company; for on-campus transportation, the "Shuttle Connect" uses a large fleet of hybrid cars to save fuel. The company also subsidizes regional public transport, provided by Sound Transit and King County Metro, as an incentive. In February 2010, however, Microsoft took a stance against adding additional public transport and high-occupancy vehicle (HOV) lanes to State Route 520 and its floating bridge connecting Redmond to Seattle; the company did not want to delay the construction any further. Microsoft was ranked number 1 in the list of the World's Best Multinational Workplaces by the Great Place to Work Institute in 2011. In January 2020, the company promised to remove all of the carbon that it has emitted since its founding in 1975. On October 9, 2020, Microsoft permanently allowed remote work.
In January 2021, the company announced on Twitter that it would join the Climate Neutral Data Centre Pact, which engages the cloud infrastructure and data center industries to reach carbon neutrality in Europe by 2030.
Headquarters
The corporate headquarters, informally known as the Microsoft Redmond campus, is located at One Microsoft Way in Redmond, Washington. Microsoft initially moved onto the grounds of the campus on February 26, 1986, weeks before the company went public on March 13. The headquarters has experienced multiple expansions since its establishment. It is estimated to encompass over 8 million ft2 (750,000 m2) of office space and 30,000–40,000 employees. Additional offices are located in Bellevue and Issaquah, Washington (90,000 employees worldwide). The company is planning to upgrade its Mountain View, California, campus on a grand scale. The company has occupied this campus since 1981. In 2016, the company bought the campus, with plans to renovate and expand it by 25%. Microsoft operates an East Coast headquarters in Charlotte, North Carolina.
Flagship stores
On October 26, 2015, the company opened its retail location on Fifth Avenue in New York City. The location features a five-story glass storefront and is 22,270 square feet. As per company executives, Microsoft had been on the lookout for a flagship location since 2009. The company's retail locations are part of a greater strategy to help build a connection with its consumers. The opening of the store coincided with the launch of the Surface Book and Surface Pro 4. On November 12, 2015, Microsoft opened a second flagship store, located in Sydney's Pitt Street Mall.
Logo
Microsoft adopted the so-called "Pac-Man Logo," designed by Scott Baker, in 1987. Baker stated "The new logo, in Helvetica italic typeface, has a slash between the o and s to emphasize the "soft" part of the name and convey motion and speed." Dave Norris ran an internal joke campaign to save the old logo, which was green, in all uppercase, and featured a fanciful letter O, nicknamed the blibbet, but it was discarded. Microsoft's logo with the tagline "Your potential. Our passion."—below the main corporate name—is based on a slogan Microsoft used in 2008. In 2002, the company started using the logo in the United States and eventually started a television campaign with the slogan, changed from the previous tagline of "Where do you want to go today?" During the private MGX (Microsoft Global Exchange) conference in 2010, Microsoft unveiled the company's next tagline, "Be What's Next." They also had a slogan/tagline "Making it all make sense."
On August 23, 2012, Microsoft unveiled a new corporate logo at the opening of its 23rd Microsoft store in Boston, indicating the company's shift of focus from the classic style to the tile-centric modern interface, which it would use on the Windows Phone platform, Xbox 360, Windows 8 and the upcoming Office suites. The new logo also includes four squares with the colors of the then-current Windows logo which have been used to represent Microsoft's four major products: Windows (blue), Office (red), Xbox (green) and Bing (yellow). The logo resembles the opening of one of the commercials for Windows 95.
Sponsorship
The company was the official jersey sponsor of Finland's national basketball team at EuroBasket 2015.
The company was a major sponsor of the Toyota Gazoo Racing WRT (2017–2020).
The company was a sponsor of the Renault F1 Team (2016–2020).
Philanthropy
During the COVID-19 pandemic, Microsoft's president, Brad Smith, announced that an initial batch of supplies, including 15,000 protection goggles, infrared thermometers, medical caps, and protective suits, was donated to Seattle, with further aid to come soon.
Criticism
Criticism of Microsoft has followed various aspects of its products and business practices. Frequently criticized are the ease of use, robustness, and security of the company's software. The company has also been criticized for the use of permatemp employees (employees employed for years as "temporary," and therefore without medical benefits), and for the use of forced retention tactics, meaning that employees would be sued if they tried to leave. Historically, Microsoft has also been accused of overworking employees, in many cases, leading to burnout within just a few years of joining the company. The company is often referred to as a "Velvet Sweatshop", a term which originated in a 1989 Seattle Times article, and later became used to describe the company by some of Microsoft's own employees. This characterization is derived from the perception that Microsoft provides nearly everything for its employees in a convenient place, but in turn overworks them to a point where it would be bad for their (possibly long-term) health.
"Embrace, extend, and extinguish" (EEE), also known as "embrace, extend, and exterminate", is a phrase that the U.S. Department of Justice found that was used internally by Microsoft to describe its strategy for entering product categories involving widely used standards, extending those standards with proprietary capabilities, and then using those differences to strongly disadvantage competitors. Microsoft is frequently accused of using anticompetitive tactics and abusing its monopolistic power. People who use their products and services often end up becoming dependent on them, a process known as vendor lock-in.
Microsoft was the first company to participate in the PRISM surveillance program, according to leaked NSA documents obtained by The Guardian and The Washington Post in June 2013, and acknowledged by government officials following the leak. The program authorizes the government to secretly access data of non-US citizens hosted by American companies without a warrant. Microsoft has denied participation in such a program.
Jesse Jackson believes Microsoft should hire more minorities and women. Jackson has urged other companies to diversify their workforce. He believes that Microsoft made some progress when it appointed two women to its board of directors in 2015.
Licensing arrangements for service providers
The Microsoft Services Provider License Agreement, or SPLA, is a mechanism by which service providers and independent software vendors (ISVs), who license Microsoft products on a monthly basis, are able to provide software services and hosting services to end-users. The SPLA can be customised to suit the solution being offered and the customers using it.
See also
List of Microsoft software
List of Microsoft hardware
List of investments by Microsoft Corporation
List of mergers and acquisitions by Microsoft
Microsoft engineering groups
Microsoft Enterprise Agreement
References
External links
1975 establishments in New Mexico
1980s initial public offerings
American brands
American companies established in 1975
Business software companies
Cloud computing providers
Companies based in Redmond, Washington
Companies in the Dow Jones Industrial Average
Companies in the NASDAQ-100
Companies in the PRISM network
Companies listed on the Nasdaq
Computer companies established in 1975
Computer hardware companies
CRM software companies
Electronics companies established in 1975
Electronics companies of the United States
ERP software companies
Mobile phone manufacturers
Multinational companies headquartered in the United States
Portmanteaus
Software companies based in Washington (state)
Software companies established in 1975
Software companies of the United States
Supply chain software companies
Technology companies established in 1975
Technology companies of the United States
Web service providers |
20087 | https://en.wikipedia.org/wiki/Modular%20arithmetic | Modular arithmetic | In mathematics, modular arithmetic is a system of arithmetic for integers, where numbers "wrap around" when reaching a certain value, called the modulus. The modern approach to modular arithmetic was developed by Carl Friedrich Gauss in his book Disquisitiones Arithmeticae, published in 1801.
A familiar use of modular arithmetic is in the 12-hour clock, in which the day is divided into two 12-hour periods. If the time is 7:00 now, then 8 hours later it will be 3:00. Simple addition would result in 15:00, but clocks "wrap around" every 12 hours. Because the hour number starts over after it reaches 12, this is arithmetic modulo 12. In terms of the definition below, 15 is congruent to 3 modulo 12, so "15:00" on a 24-hour clock is displayed "3:00" on a 12-hour clock.
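A minimal illustration of this wrap-around in C (a sketch for this example only, displaying 0 as 12):

#include <stdio.h>

int main(void)
{
    int hour = (7 + 8) % 12;                   /* 15 wraps around to 3 */
    printf("%d:00\n", hour == 0 ? 12 : hour);  /* prints "3:00" */
    return 0;
}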
Congruence
Given an integer n ≥ 1, called a modulus, two integers a and b are said to be congruent modulo n, if n is a divisor of their difference (i.e., if there is an integer k such that a − b = kn).
Congruence modulo n is a congruence relation, meaning that it is an equivalence relation that is compatible with the operations of addition, subtraction, and multiplication. Congruence modulo n is denoted:
a ≡ b (mod n)
The parentheses mean that (mod n) applies to the entire equation, not just to the right-hand side (here b). This notation is not to be confused with the notation b mod n (without parentheses), which refers to the modulo operation. Indeed, b mod n denotes the unique integer r such that 0 ≤ r < n and r ≡ b (mod n) (i.e., the remainder of b when divided by n).
The congruence relation may be rewritten as
a = kn + b,
explicitly showing its relationship with Euclidean division. However, b here need not be the remainder of the division of a by n. Instead, what the statement a ≡ b (mod n) asserts is that a and b have the same remainder when divided by n. That is,
a = pn + r,
b = qn + r,
where 0 ≤ r < n is the common remainder. Subtracting these two expressions, we recover the previous relation:
a − b = kn,
by setting k = p − q.
Examples
In modulus 12, one can assert that:
38 ≡ 14 (mod 12)
because 38 − 14 = 24, which is a multiple of 12. Another way to express this is to say that both 38 and 14 have the same remainder 2, when divided by 12.
The definition of congruence also applies to negative values. For example:
2 ≡ −3 (mod 5)
−8 ≡ 7 (mod 5)
−3 ≡ −8 (mod 5)
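Programming languages differ in how the built-in remainder operator treats negative operands; in C, for example, the sign of a % n follows the dividend, so -8 % 5 is -3. A small sketch of a helper (hypothetical name mod, an illustration rather than a library function) that returns the least non-negative residue, consistent with the examples above:

/* Least non-negative residue of a modulo n; assumes n > 0. */
int mod(int a, int n)
{
    int r = a % n;
    return r < 0 ? r + n : r;   /* shift into the range [0, n) */
}
/* mod(-8, 5) == 2, and indeed -8 ≡ 2 ≡ 7 (mod 5). */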
Properties
The congruence relation satisfies all the conditions of an equivalence relation:
Reflexivity: a ≡ a (mod n)
Symmetry: a ≡ b (mod n) if b ≡ a (mod n), for all a, b, and n.
Transitivity: If a ≡ b (mod n) and b ≡ c (mod n), then a ≡ c (mod n)
If a1 ≡ b1 (mod n) and a2 ≡ b2 (mod n), or if a ≡ b (mod n), then:
a + k ≡ b + k (mod n) for any integer k (compatibility with translation)
ka ≡ kb (mod n) for any integer k (compatibility with scaling)
a1 + a2 ≡ b1 + b2 (mod n) (compatibility with addition)
a1 − a2 ≡ b1 − b2 (mod n) (compatibility with subtraction)
a1 a2 ≡ b1 b2 (mod n) (compatibility with multiplication)
a^k ≡ b^k (mod n) for any non-negative integer k (compatibility with exponentiation)
p(a) ≡ p(b) (mod n), for any polynomial p(x) with integer coefficients (compatibility with polynomial evaluation)
If a ≡ b (mod n), then it is generally false that k^a ≡ k^b (mod n). However, the following is true:
If c ≡ d (mod φ(n)), where φ is Euler's totient function, then a^c ≡ a^d (mod n), provided that a is coprime with n.
For cancellation of common terms, we have the following rules:
If a + k ≡ b + k (mod n), where k is any integer, then a ≡ b (mod n)
If ka ≡ kb (mod n) and k is coprime with n, then a ≡ b (mod n)
If ka ≡ kb (mod kn) and k ≠ 0, then a ≡ b (mod n)
The modular multiplicative inverse is defined by the following rules:
Existence: there exists an integer denoted a^(−1) such that a a^(−1) ≡ 1 (mod n) if and only if a is coprime with n. This integer a^(−1) is called a modular multiplicative inverse of a modulo n.
If a ≡ b (mod n) and a^(−1) exists, then a^(−1) ≡ b^(−1) (mod n) (compatibility with multiplicative inverse, and, if a = b, uniqueness modulo n)
If ax ≡ b (mod n) and a is coprime to n, then the solution to this linear congruence is given by x ≡ a^(−1) b (mod n)
The multiplicative inverse x ≡ a^(−1) (mod n) may be efficiently computed by solving Bézout's equation ax + ny = 1 for x and y, using the extended Euclidean algorithm (a C sketch is given below).
In particular, if p is a prime number, then a is coprime with p for every a such that 0 < a < p; thus a multiplicative inverse exists for all a that is not congruent to zero modulo p.
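A minimal C sketch of this computation (hypothetical helper name mod_inverse, not taken from any particular library), assuming a > 0 and n > 1:

#include <stdint.h>

/* Returns x with (a * x) % n == 1, or -1 if a and n are not coprime.
   Runs the extended Euclidean algorithm, tracking only the Bézout
   coefficient of a. */
int64_t mod_inverse(int64_t a, int64_t n)
{
    int64_t old_r = a, r = n;   /* remainder sequence */
    int64_t old_s = 1, s = 0;   /* coefficients of a */
    while (r != 0) {
        int64_t q = old_r / r, tmp;
        tmp = old_r - q * r; old_r = r; r = tmp;
        tmp = old_s - q * s; old_s = s; s = tmp;
    }
    if (old_r != 1)
        return -1;              /* gcd(a, n) != 1: no inverse exists */
    int64_t x = old_s % n;
    return x < 0 ? x + n : x;
}
/* Example: mod_inverse(3, 7) == 5, since 3 * 5 = 15 ≡ 1 (mod 7). */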
Some of the more advanced properties of congruence relations are the following:
Fermat's little theorem: If p is prime and p does not divide a, then a^(p−1) ≡ 1 (mod p).
Euler's theorem: If a and n are coprime, then a^φ(n) ≡ 1 (mod n), where φ is Euler's totient function
A simple consequence of Fermat's little theorem is that if p is prime, then a^(−1) ≡ a^(p−2) (mod p) is the multiplicative inverse of 0 < a < p. More generally, from Euler's theorem, if a and n are coprime, then a^(−1) ≡ a^(φ(n)−1) (mod n).
Another simple consequence is that if a ≡ b (mod φ(n)), where φ is Euler's totient function, then k^a ≡ k^b (mod n) provided k is coprime with n.
Wilson's theorem: p is prime if and only if (p − 1)! ≡ −1 (mod p).
Chinese remainder theorem: For any a, b and coprime m, n, there exists a unique x (mod mn) such that x ≡ a (mod m) and x ≡ b (mod n). In fact, x ≡ b m_n^(−1) m + a n_m^(−1) n (mod mn), where m_n^(−1) is the inverse of m modulo n and n_m^(−1) is the inverse of n modulo m (a C sketch is given below).
Lagrange's theorem: The congruence f(x) ≡ 0 (mod p), where p is prime, and f(x) = a_0 x^d + ... + a_d is a polynomial with integer coefficients such that a_0 is not congruent to 0 (mod p), has at most d roots.
Primitive root modulo n: A number g is a primitive root modulo n if, for every integer a coprime to n, there is an integer k such that g^k ≡ a (mod n). A primitive root modulo n exists if and only if n is equal to 2, 4, p^k or 2p^k, where p is an odd prime number and k is a positive integer. If a primitive root modulo n exists, then there are exactly φ(φ(n)) such primitive roots, where φ is the Euler's totient function.
Quadratic residue: An integer a is a quadratic residue modulo n, if there exists an integer x such that x^2 ≡ a (mod n). Euler's criterion asserts that, if p is an odd prime, and a is not a multiple of p, then a is a quadratic residue modulo p if and only if a^((p−1)/2) ≡ 1 (mod p).
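A minimal C sketch of the two-modulus case of the Chinese remainder theorem, reusing the mod_inverse helper sketched above (an assumption of this illustration) and intended only for small coprime moduli, since the intermediate products may otherwise overflow:

#include <stdint.h>

/* Returns the unique x in [0, m*n) with x ≡ a (mod m) and x ≡ b (mod n),
   assuming m and n are coprime and the products below fit in int64_t. */
int64_t crt_pair(int64_t a, int64_t m, int64_t b, int64_t n)
{
    int64_t mn = m * n;
    int64_t m_inv = mod_inverse(m % n, n);   /* inverse of m modulo n */
    int64_t n_inv = mod_inverse(n % m, m);   /* inverse of n modulo m */
    int64_t x = (a % m) * n_inv % mn * n % mn
              + (b % n) * m_inv % mn * m % mn;
    return ((x % mn) + mn) % mn;
}
/* Example: crt_pair(2, 3, 3, 5) == 8, since 8 ≡ 2 (mod 3) and 8 ≡ 3 (mod 5). */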
Congruence classes
Like any congruence relation, congruence modulo n is an equivalence relation, and the equivalence class of the integer a, denoted by [a]_n, is the set {..., a − 2n, a − n, a, a + n, a + 2n, ...}. This set, consisting of all the integers congruent to a modulo n, is called the congruence class, residue class, or simply residue of the integer a modulo n. When the modulus n is known from the context, that residue may also be denoted [a].
Residue systems
Each residue class modulo n may be represented by any one of its members, although we usually represent each residue class by the smallest nonnegative integer which belongs to that class (since this is the proper remainder which results from division). Any two members of different residue classes modulo n are incongruent modulo n. Furthermore, every integer belongs to one and only one residue class modulo n.
The set of integers {0, 1, 2, ..., n − 1} is called the least residue system modulo n. Any set of n integers, no two of which are congruent modulo n, is called a complete residue system modulo n.
The least residue system is a complete residue system, and a complete residue system is simply a set containing precisely one representative of each residue class modulo n. For example, the least residue system modulo 4 is {0, 1, 2, 3}. Some other complete residue systems modulo 4 include:
{1, 2, 3, 4}
{13, 14, 15, 16}
{−2, −1, 0, 1}
{−13, 4, 17, 18}
{−5, 0, 6, 21}
{27, 32, 37, 42}
Some sets which are not complete residue systems modulo 4 are:
{−5, 0, 6, 22}, since 6 is congruent to 22 modulo 4.
{5, 15}, since a complete residue system modulo 4 must have exactly 4 incongruent residue classes.
Reduced residue systems
Given Euler's totient function φ(n), any set of φ(n) integers that are relatively prime to n and mutually incongruent under modulus n is called a reduced residue system modulo n. The set {5, 15} from above, for example, is an instance of a reduced residue system modulo 4.
Integers modulo n
The set of all congruence classes of the integers for a modulus n is called the ring of integers modulo n, and is denoted Z/nZ, Z/n, or Z_n. The notation Z_n is, however, not recommended because it can be confused with the set of n-adic integers. The ring Z/nZ is fundamental to various branches of mathematics (see below).
The set Z/nZ is defined for n > 0 as:
Z/nZ = { [a]_n : a ∈ Z } = { [0]_n, [1]_n, ..., [n − 1]_n }
(When n = 0, Z/nZ is not an empty set; rather, it is isomorphic to Z, since [a]_0 = {a}.)
We define addition, subtraction, and multiplication on Z/nZ by the following rules:
[a]_n + [b]_n = [a + b]_n
[a]_n − [b]_n = [a − b]_n
[a]_n [b]_n = [ab]_n
The verification that this is a proper definition uses the properties given before.
In this way, Z/nZ becomes a commutative ring. For example, in the ring Z/24Z, we have [12]_24 + [21]_24 = [33]_24 = [9]_24,
as in the arithmetic for the 24-hour clock.
We use the notation Z/nZ because this is the quotient ring of Z by the ideal nZ, a set containing all integers divisible by n, where 0Z is the singleton set {0}. Thus Z/nZ is a field when nZ is a maximal ideal (i.e., when n is prime).
This can also be constructed from the group Z under the addition operation alone. The residue class [a]_n is the group coset of a in the quotient group Z/nZ, a cyclic group.
Rather than excluding the special case n = 0, it is more useful to include Z/0Z (which, as mentioned before, is isomorphic to the ring Z of integers). In fact, this inclusion is useful when discussing the characteristic of a ring.
The ring of integers modulo n is a finite field if and only if n is prime (this ensures that every nonzero element has a multiplicative inverse). If n = p^k is a prime power with k > 1, there exists a unique (up to isomorphism) finite field GF(p^k) with p^k elements, but this is not Z/nZ, which fails to be a field because it has zero-divisors.
The multiplicative subgroup of integers modulo n is denoted by (Z/nZ)^×. This consists of the classes [a]_n where a is coprime to n, which are precisely the classes possessing a multiplicative inverse. This forms a commutative group under multiplication, with order φ(n).
Applications
In theoretical mathematics, modular arithmetic is one of the foundations of number theory, touching on almost every aspect of its study, and it is also used extensively in group theory, ring theory, knot theory, and abstract algebra. In applied mathematics, it is used in computer algebra, cryptography, computer science, chemistry and the visual and musical arts.
A very practical application is to calculate checksums within serial number identifiers. For example, International Standard Book Number (ISBN) uses modulo 11 (for 10 digit ISBN) or modulo 10 (for 13 digit ISBN) arithmetic for error detection. Likewise, International Bank Account Numbers (IBANs), for example, make use of modulo 97 arithmetic to spot user input errors in bank account numbers. In chemistry, the last digit of the CAS registry number (a unique identifying number for each chemical compound) is a check digit, which is calculated by taking the last digit of the first two parts of the CAS registry number times 1, the previous digit times 2, the previous digit times 3 etc., adding all these up and computing the sum modulo 10.
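As a concrete sketch of such a check digit computation (an illustration, not a production validator), the following C function tests the ISBN-10 condition that the weighted digit sum 10·d1 + 9·d2 + ... + 1·d10 is divisible by 11:

#include <string.h>

/* Validates a 10-character ISBN-10 string; the final character may be 'X',
   which stands for the value 10. */
int isbn10_is_valid(const char *isbn)
{
    if (strlen(isbn) != 10)
        return 0;
    int sum = 0;
    for (int i = 0; i < 10; i++) {
        int d;
        if (isbn[i] >= '0' && isbn[i] <= '9')
            d = isbn[i] - '0';
        else if ((isbn[i] == 'X' || isbn[i] == 'x') && i == 9)
            d = 10;
        else
            return 0;               /* not a valid ISBN-10 character */
        sum += (10 - i) * d;        /* weights 10, 9, ..., 1 */
    }
    return sum % 11 == 0;
}
/* Example: isbn10_is_valid("0306406152") returns 1. */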
In cryptography, modular arithmetic directly underpins public key systems such as RSA and Diffie–Hellman, and provides finite fields which underlie elliptic curves, and is used in a variety of symmetric key algorithms including Advanced Encryption Standard (AES), International Data Encryption Algorithm (IDEA), and RC4. RSA and Diffie–Hellman use modular exponentiation.
In computer algebra, modular arithmetic is commonly used to limit the size of integer coefficients in intermediate calculations and data. It is used in polynomial factorization, a problem for which all known efficient algorithms use modular arithmetic. It is used by the most efficient implementations of polynomial greatest common divisor, exact linear algebra and Gröbner basis algorithms over the integers and the rational numbers. As posted on Fidonet in the 1980s and archived at Rosetta Code, modular arithmetic was used to disprove Euler's sum of powers conjecture on a Sinclair QL microcomputer using just one-fourth of the integer precision used by a CDC 6600 supercomputer to disprove it two decades earlier via a brute force search.
In computer science, modular arithmetic is often applied in bitwise operations and other operations involving fixed-width, cyclic data structures. The modulo operation, as implemented in many programming languages and calculators, is an application of modular arithmetic that is often used in this context. The logical operator XOR sums 2 bits, modulo 2.
In music, arithmetic modulo 12 is used in the consideration of the system of twelve-tone equal temperament, where octave and enharmonic equivalency occurs (that is, pitches in a 1:2 or 2:1 ratio are equivalent, and C-sharp is considered the same as D-flat).
The method of casting out nines offers a quick check of decimal arithmetic computations performed by hand. It is based on modular arithmetic modulo 9, and specifically on the crucial property that 10 ≡ 1 (mod 9).
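A small C sketch of this check (illustrative helper name digits_mod9): because 10 ≡ 1 (mod 9), a decimal number is congruent to its digit sum modulo 9, so a claimed result whose digit sum disagrees modulo 9 must be wrong (although agreement does not prove correctness):

/* Sum of decimal digits, reduced modulo 9; equals n mod 9 because 10 ≡ 1 (mod 9). */
int digits_mod9(unsigned long n)
{
    unsigned long s = 0;
    while (n > 0) {
        s += n % 10;   /* add the decimal digits */
        n /= 10;
    }
    return (int)(s % 9);
}
/* Checking 38 + 14 = 52: (digits_mod9(38) + digits_mod9(14)) % 9 == 7 == digits_mod9(52),
   so the hand computation is consistent. */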
Arithmetic modulo 7 is used in algorithms that determine the day of the week for a given date. In particular, Zeller's congruence and the Doomsday algorithm make heavy use of modulo-7 arithmetic.
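For example, Zeller's congruence for the Gregorian calendar reduces a date to a day of the week with a single expression modulo 7; the following C function is a standard formulation, shown here as an illustration:

/* Zeller's congruence (Gregorian calendar): returns 0 = Saturday, 1 = Sunday,
   ..., 6 = Friday. January and February are counted as months 13 and 14 of
   the previous year. */
int day_of_week(int year, int month, int day)
{
    if (month < 3) {
        month += 12;
        year -= 1;
    }
    int k = year % 100;   /* year within the century */
    int j = year / 100;   /* zero-based century */
    return (day + (13 * (month + 1)) / 5 + k + k / 4 + j / 4 + 5 * j) % 7;
}
/* Example: day_of_week(2000, 1, 1) == 0, and January 1, 2000 was a Saturday. */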
More generally, modular arithmetic also has application in disciplines such as law (e.g., apportionment), economics (e.g., game theory) and other areas of the social sciences, where proportional division and allocation of resources plays a central part of the analysis.
Computational complexity
Since modular arithmetic has such a wide range of applications, it is important to know how hard it is to solve a system of congruences. A linear system of congruences can be solved in polynomial time with a form of Gaussian elimination, for details see linear congruence theorem. Algorithms, such as Montgomery reduction, also exist to allow simple arithmetic operations, such as multiplication and exponentiation modulo , to be performed efficiently on large numbers.
Some operations, like finding a discrete logarithm or a quadratic congruence appear to be as hard as integer factorization and thus are a starting point for cryptographic algorithms and encryption. These problems might be NP-intermediate.
Solving a system of non-linear modular arithmetic equations is NP-complete.
Example implementations
Below are three reasonably fast C functions, two for performing modular multiplication and one for modular exponentiation on unsigned integers not larger than 63 bits, without overflow of the transient operations.
An algorithmic way to compute a * b (mod m):
#include <stdint.h> /* for uint64_t, used by all routines below */

uint64_t mul_mod(uint64_t a, uint64_t b, uint64_t m)
{
if (!((a | b) & (0xFFFFFFFFULL << 32)))
return a * b % m;
uint64_t d = 0, mp2 = m >> 1; /* d accumulates the result; mp2 = floor(m / 2) */
int i;
if (a >= m) a %= m;
if (b >= m) b %= m;
for (i = 0; i < 64; ++i)
{
d = (d > mp2) ? (d << 1) - m : d << 1;
if (a & 0x8000000000000000ULL)
d += b;
if (d >= m) d -= m;
a <<= 1;
}
return d;
}
On computer architectures where an extended precision format with at least 64 bits of mantissa is available (such as the long double type of most x86 C compilers), the following routine is faster than a solution using a loop, by employing the trick that, by hardware, floating-point multiplication results in the most significant bits of the product kept, while integer multiplication results in the least significant bits kept:
uint64_t mul_mod(uint64_t a, uint64_t b, uint64_t m)
{
long double x;
uint64_t c;
int64_t r;
if (a >= m) a %= m;
if (b >= m) b %= m;
x = a;
c = x * b / m; /* approximates floor(a * b / m) in extended precision */
r = (int64_t)(a * b - c * m) % (int64_t)m;
return r < 0 ? r + m : r;
}
Below is a C function for performing modular exponentiation, that uses the function implemented above.
An algorithmic way to compute a^b (mod m):
uint64_t pow_mod(uint64_t a, uint64_t b, uint64_t m)
{
uint64_t r = m==1?0:1;
while (b > 0) {
if (b & 1)
r = mul_mod(r, a, m);
b = b >> 1;
a = mul_mod(a, a, m);
}
return r;
}
However, for all of the above routines to work, m must not exceed 63 bits.
See also
Boolean ring
Circular buffer
Division (mathematics)
Finite field
Legendre symbol
Modular exponentiation
Modulo (mathematics)
Multiplicative group of integers modulo n
Pisano period (Fibonacci sequences modulo n)
Primitive root modulo n
Quadratic reciprocity
Quadratic residue
Rational reconstruction (mathematics)
Reduced residue system
Serial number arithmetic (a special case of modular arithmetic)
Two-element Boolean algebra
Topics relating to the group theory behind modular arithmetic:
Cyclic group
Multiplicative group of integers modulo n
Other important theorems relating to modular arithmetic:
Carmichael's theorem
Chinese remainder theorem
Euler's theorem
Fermat's little theorem (a special case of Euler's theorem)
Lagrange's theorem
Thue's lemma
Notes
References
John L. Berggren. "modular arithmetic". Encyclopædia Britannica.
. See in particular chapters 5 and 6 for a review of basic modular arithmetic.
Maarten Bullynck "Modular Arithmetic before C.F. Gauss. Systematisations and discussions on remainder problems in 18th-century Germany"
Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, and Clifford Stein. Introduction to Algorithms, Second Edition. MIT Press and McGraw-Hill, 2001. . Section 31.3: Modular arithmetic, pp. 862–868.
Anthony Gioia, Number Theory, an Introduction Reprint (2001) Dover. .
External links
In this modular art article, one can learn more about applications of modular arithmetic in art.
An article on modular arithmetic on the GIMPS wiki
Modular Arithmetic and patterns in addition and multiplication tables
Finite rings
Group theory
Articles with example C code |
20268 | https://en.wikipedia.org/wiki/Microsoft%20Excel | Microsoft Excel | Microsoft Excel is a spreadsheet developed by Microsoft for Windows, macOS, Android and iOS. It features calculation or computation capabilities, graphing tools, pivot tables, and a macro programming language called Visual Basic for Applications (VBA). Excel forms part of the Microsoft Office suite of software.
Features
Basic operation
Microsoft Excel has the basic features of all spreadsheets, using a grid of cells arranged in numbered rows and letter-named columns to organize data manipulations like arithmetic operations. It has a battery of supplied functions to answer statistical, engineering, and financial needs. In addition, it can display data as line graphs, histograms and charts, and with a very limited three-dimensional graphical display. It allows sectioning of data to view its dependencies on various factors for different perspectives (using pivot tables and the scenario manager). A PivotTable is a tool for data analysis that simplifies large data sets via PivotTable fields. It has a programming aspect, Visual Basic for Applications, allowing the user to employ a wide variety of numerical methods, for example, for solving differential equations of mathematical physics, and then reporting the results back to the spreadsheet. It also has a variety of interactive features allowing user interfaces that can completely hide the spreadsheet from the user, so the spreadsheet presents itself as a so-called application, or decision support system (DSS), via a custom-designed user interface, for example, a stock analyzer, or in general, as a design tool that asks the user questions and provides answers and reports. In a more elaborate realization, an Excel application can automatically poll external databases and measuring instruments using an update schedule, analyze the results, make a Word report or PowerPoint slide show, and e-mail these presentations on a regular basis to a list of participants. Excel was not designed to be used as a database.
Microsoft allows for a number of optional command-line switches to control the manner in which Excel starts.
Functions
Excel 2016 has 484 functions. Of these, 360 existed prior to Excel 2010. Microsoft classifies these functions in 14 categories. Of the 484 current functions, 386 may be called from VBA as methods of the object "WorksheetFunction" and 44 have the same names as VBA functions.
With the introduction of LAMBDA, Excel will become Turing complete.
Macro programming
VBA programming
The Windows version of Excel supports programming through Microsoft's Visual Basic for Applications (VBA), which is a dialect of Visual Basic. Programming with VBA allows spreadsheet manipulation that is awkward or impossible with standard spreadsheet techniques. Programmers may write code directly using the Visual Basic Editor (VBE), which includes a window for writing code, debugging code, and code module organization environment. The user can implement numerical methods as well as automating tasks such as formatting or data organization in VBA and guide the calculation using any desired intermediate results reported back to the spreadsheet.
VBA was removed from Mac Excel 2008, as the developers did not believe that a timely release would allow porting the VBA engine natively to Mac OS X. VBA was restored in the next version, Mac Excel 2011, although the build lacks support for ActiveX objects, impacting some high level developer tools.
A common and easy way to generate VBA code is by using the Macro Recorder. The Macro Recorder records actions of the user and generates VBA code in the form of a macro. These actions can then be repeated automatically by running the macro. The macros can also be linked to different trigger types like keyboard shortcuts, a command button or a graphic. The actions in the macro can be executed from these trigger types or from the generic toolbar options. The VBA code of the macro can also be edited in the VBE. Certain features such as loop functions and screen prompt by their own properties, and some graphical display items, cannot be recorded but must be entered into the VBA module directly by the programmer. Advanced users can employ user prompts to create an interactive program, or react to events such as sheets being loaded or changed.
Code generated by the Macro Recorder may not be compatible between Excel versions. Some code that is used in Excel 2010 cannot be used in Excel 2003. Making a macro that changes the cell colors and making changes to other aspects of cells may not be backward compatible.
VBA code interacts with the spreadsheet through the Excel Object Model, a vocabulary identifying spreadsheet objects, and a set of supplied functions or methods that enable reading and writing to the spreadsheet and interaction with its users (for example, through custom toolbars or command bars and message boxes). User-created VBA subroutines execute these actions and operate like macros generated using the macro recorder, but are more flexible and efficient.
History
From its first version Excel supported end-user programming of macros (automation of repetitive tasks) and user-defined functions (extension of Excel's built-in function library). In early versions of Excel, these programs were written in a macro language whose statements had formula syntax and resided in the cells of special-purpose macro sheets (stored with file extension .XLM in Windows.) XLM was the default macro language for Excel through Excel 4.0. Beginning with version 5.0 Excel recorded macros in VBA by default but with version 5.0 XLM recording was still allowed as an option. After version 5.0 that option was discontinued. All versions of Excel, including Excel 2010 are capable of running an XLM macro, though Microsoft discourages their use.
Charts
Excel supports charts, graphs, or histograms generated from specified groups of cells. It also supports Pivot Charts that allow for a chart to be linked directly to a Pivot table. This allows the chart to be refreshed with the Pivot Table. The generated graphic component can either be embedded within the current sheet or added as a separate object.
These displays are dynamically updated if the content of cells changes. For example, suppose that the important design requirements are displayed visually; then, in response to a user's change in trial values for parameters, the curves describing the design change shape, and their points of intersection shift, assisting the selection of the best design.
Add-ins
Additional features are available using add-ins. Several are provided with Excel, including:
Analysis ToolPak: Provides data analysis tools for statistical and engineering analysis (includes analysis of variance and regression analysis)
Analysis ToolPak VBA: VBA functions for Analysis ToolPak
Euro Currency Tools: Conversion and formatting for euro currency
Solver Add-In: Tools for optimization and equation solving
Excel for the web
Excel for the web is a free lightweight version of Microsoft Excel available as part of Office on the web, which also includes web versions of Microsoft Word and Microsoft PowerPoint.
Excel for the web can display most of the features available in the desktop versions of Excel, although it may not be able to insert or edit them. Certain data connections are not accessible on Excel for the web, including with charts that may use these external connections. Excel for the web also cannot display legacy features, such as Excel 4.0 macros or Excel 5.0 dialog sheets. There are also small differences between how some of the Excel functions work.
Data storage and communication
Number of rows and columns
Versions of Excel up to 7.0 had a limitation in the size of their data sets of 16K (2^14 = 16,384) rows. Versions 8.0 through 11.0 could handle 64K (2^16 = 65,536) rows and 256 columns (2^8, labeled as 'IV'). Version 12.0 onwards, including the current Version 16.x, can handle over 1M (2^20 = 1,048,576) rows, and 16,384 (2^14, labeled as column 'XFD') columns.
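For illustration, the column labels mentioned above follow a bijective base-26 scheme ('A'–'Z', then 'AA', 'AB', and so on). A small C sketch (hypothetical helper, not part of Excel) that maps a 1-based column index to its label:

/* Writes the Excel-style letter label of a 1-based column index into buf,
   e.g. 1 -> "A", 256 -> "IV", 16384 -> "XFD". buf must hold at least 8 chars. */
void column_label(unsigned index, char *buf)
{
    char tmp[8];
    int n = 0;
    while (index > 0) {
        index--;                      /* bijective base-26: shift to 0..25 */
        tmp[n++] = (char)('A' + index % 26);
        index /= 26;
    }
    for (int i = 0; i < n; i++)       /* digits were produced least significant first */
        buf[i] = tmp[n - 1 - i];
    buf[n] = '\0';
}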
File formats
Microsoft Excel up until 2007 version used a proprietary binary file format called Excel Binary File Format (.XLS) as its primary format. Excel 2007 uses Office Open XML as its primary file format, an XML-based format that followed after a previous XML-based format called "XML Spreadsheet" ("XMLSS"), first introduced in Excel 2002.
Although supporting and encouraging the use of new XML-based formats as replacements, Excel 2007 remained backwards-compatible with the traditional, binary formats. In addition, most versions of Microsoft Excel can read CSV, DBF, SYLK, DIF, and other legacy formats. Support for some older file formats was removed in Excel 2007. The file formats were mainly from DOS-based programs.
Binary
OpenOffice.org has created documentation of the Excel format. Two epochs of the format exist: the 97-2003 OLE format, and the older stream format. Microsoft has made the Excel binary format specification available to freely download.
XML Spreadsheet
The XML Spreadsheet format introduced in Excel 2002 is a simple, XML based format missing some more advanced features like storage of VBA macros. Though the intended file extension for this format is .xml, the program also correctly handles XML files with .xls extension. This feature is widely used by third-party applications (e.g. MySQL Query Browser) to offer "export to Excel" capabilities without implementing binary file format. The following example will be correctly opened by Excel if saved either as Book1.xml or Book1.xls:
<?xml version="1.0"?>
<Workbook xmlns="urn:schemas-microsoft-com:office:spreadsheet"
xmlns:o="urn:schemas-microsoft-com:office:office"
xmlns:x="urn:schemas-microsoft-com:office:excel"
xmlns:ss="urn:schemas-microsoft-com:office:spreadsheet"
xmlns:html="http://www.w3.org/TR/REC-html40">
<Worksheet ss:Name="Sheet1">
<Table ss:ExpandedColumnCount="2" ss:ExpandedRowCount="2" x:FullColumns="1" x:FullRows="1">
<Row>
<Cell><Data ss:Type="String">Name</Data></Cell>
<Cell><Data ss:Type="String">Example</Data></Cell>
</Row>
<Row>
<Cell><Data ss:Type="String">Value</Data></Cell>
<Cell><Data ss:Type="Number">123</Data></Cell>
</Row>
</Table>
</Worksheet>
</Workbook>
Current file extensions
Microsoft Excel 2007, along with the other products in the Microsoft Office 2007 suite, introduced new file formats. The first of these (.xlsx) is defined in the Office Open XML (OOXML) specification.
Old file extensions
Using other Windows applications
Windows applications such as Microsoft Access and Microsoft Word, as well as Excel can communicate with each other and use each other's capabilities. The most common is Dynamic Data Exchange: although strongly deprecated by Microsoft, this is a common method to send data between applications running on Windows, with official MS publications referring to it as "the protocol from hell". As the name suggests, it allows applications to supply data to others for calculation and display. It is very common in financial markets, being used to connect to important financial data services such as Bloomberg and Reuters.
OLE Object Linking and Embedding allows a Windows application to control another to enable it to format or calculate data. This may take on the form of "embedding" where an application uses another to handle a task that it is more suited to, for example a PowerPoint presentation may be embedded in an Excel spreadsheet or vice versa.
Using external data
Excel users can access external data sources via Microsoft Office features such as (for example) connections built with the Office Data Connection file format. Excel files themselves may be updated using a Microsoft supplied ODBC driver.
Excel can accept data in real-time through several programming interfaces, which allow it to communicate with many data sources such as Bloomberg and Reuters (through addins such as Power Plus Pro).
DDE: "Dynamic Data Exchange" uses the message passing mechanism in Windows to allow data to flow between Excel and other applications. Although it is easy for users to create such links, programming such links reliably is so difficult that Microsoft, the creators of the system, officially refer to it as "the protocol from hell". In spite of its many issues DDE remains the most common way for data to reach traders in financial markets.
Network DDE Extended the protocol to allow spreadsheets on different computers to exchange data. Starting with Windows Vista, Microsoft no longer supports the facility.
Real Time Data: RTD although in many ways technically superior to DDE, has been slow to gain acceptance, since it requires non-trivial programming skills, and when first released was neither adequately documented nor supported by the major data vendors.
Alternatively, Microsoft Query provides ODBC-based browsing within Microsoft Excel.
Export and migration of spreadsheets
Programmers have produced APIs to open Excel spreadsheets in a variety of applications and environments other than Microsoft Excel. These include opening Excel documents on the web using either ActiveX controls, or plugins like the Adobe Flash Player. The Apache POI opensource project provides Java libraries for reading and writing Excel spreadsheet files. ExcelPackage is another open-source project that provides server-side generation of Microsoft Excel 2007 spreadsheets. PHPExcel is a PHP library that converts Excel5, Excel 2003, and Excel 2007 formats into objects for reading and writing within a web application. Excel Services is a current .NET developer tool that can enhance Excel's capabilities. Excel spreadsheets can be accessed from Python with xlrd and openpyxl. js-xlsx and js-xls can open Excel spreadsheets from JavaScript.
Password protection
Microsoft Excel protection offers several types of passwords:
Password to open a document
Password to modify a document
Password to unprotect the worksheet
Password to protect workbook
Password to protect the sharing workbook
All passwords except the password to open a document can be removed instantly regardless of the Microsoft Excel version used to create the document. These types of passwords are used primarily for shared work on a document. Such password-protected documents are not encrypted, and data derived from the set password is saved in the document's header. The password to protect the workbook is an exception – when it is set, a document is encrypted with the standard password “VelvetSweatshop”, but since it is known to the public, it actually does not add any extra protection to the document. The only type of password that can prevent a trespasser from gaining access to a document is the password to open a document. The cryptographic strength of this kind of protection depends strongly on the Microsoft Excel version that was used to create the document.
In Microsoft Excel 95 and earlier versions, the password to open is converted to a 16-bit key that can be instantly cracked. In Excel 97/2000 the password is converted to a 40-bit key, which can also be cracked very quickly using modern equipment. As regards services that use rainbow tables (e.g. Password-Find), it takes up to several seconds to remove protection. In addition, password-cracking programs can brute-force attack passwords at a rate of hundreds of thousands of passwords a second, which not only lets them decrypt a document but also find the original password.
In Excel 2003/XP the encryption is slightly better – a user can choose any encryption algorithm that is available in the system (see Cryptographic Service Provider). Due to the CSP, an Excel file can't be decrypted, and thus the password to open can't be removed, though the brute-force attack speed remains quite high. Nevertheless, the older Excel 97/2000 algorithm is set by the default. Therefore, users who do not change the default settings lack reliable protection of their documents.
The situation changed fundamentally in Excel 2007, where the modern AES algorithm with a key of 128 bits started being used for encryption, and a 50,000-fold use of the hash function SHA1 reduced the speed of brute-force attacks down to hundreds of passwords per second. In Excel 2010, the strength of the default protection was doubled by using a 100,000-fold SHA1 to convert a password to a key.
Microsoft Excel Viewer
Microsoft Excel Viewer was a freeware program for Microsoft Windows for viewing and printing spreadsheet documents created by Excel. Microsoft retired the viewer in April 2018 with the last security update released in February 2019 for Excel Viewer 2007 (SP3).
The first version released by Microsoft was Excel 97 Viewer. Excel 97 Viewer was supported in Windows CE for Handheld PCs. In October 2004, Microsoft released Excel Viewer 2003. In September 2007, Microsoft released Excel Viewer 2003 Service Pack 3 (SP3). In January 2008, Microsoft released Excel Viewer 2007 (featuring a non-collapsible Ribbon interface). In April 2009, Microsoft released Excel Viewer 2007 Service Pack 2 (SP2). In October 2011, Microsoft released Excel Viewer 2007 Service Pack 3 (SP3).
To view and print Excel files for free, Microsoft advises using the Excel Mobile application on Windows 10, or, on Windows 7 and Windows 8, uploading the file to OneDrive and opening it in a browser with Excel for the web and a Microsoft account.
Quirks
In addition to issues with spreadsheets in general, other problems specific to Excel include numeric precision, misleading statistics functions, mod function errors, date limitations and more.
Numeric precision
Despite the use of 15-figure precision, Excel can display many more figures (up to thirty) upon user request. But the displayed figures are not those actually used in its computations, and so, for example, the difference of two numbers may differ from the difference of their displayed values. Although such departures are usually beyond the 15th decimal, exceptions do occur, especially for very large or very small numbers. Serious errors can occur if decisions are made based upon automated comparisons of numbers (for example, using the Excel If function), as equality of two numbers can be unpredictable.
In the figure, the fraction 1/9000 is displayed in Excel. Although this number has a decimal representation that is an infinite string of ones, Excel displays only the leading 15 figures. In the second line, the number one is added to the fraction, and again Excel displays only 15 figures. In the third line, one is subtracted from the sum using Excel. Because the sum in the second line has only eleven 1's after the decimal, the difference when 1 is subtracted from this displayed value is three 0's followed by a string of eleven 1's. However, the difference reported by Excel in the third line is three 0's followed by a string of thirteen 1's and two extra erroneous digits. This is because Excel calculates with about half a digit more than it displays.
Excel works with a modified 1985 version of the IEEE 754 specification. Excel's implementation involves conversions between binary and decimal representations, leading to accuracy that is on average better than one would expect from simple fifteen digit precision, but that can be worse. See the main article for details.
Besides accuracy in user computations, the question of accuracy in Excel-provided functions may be raised. Particularly in the arena of statistical functions, Excel has been criticized for sacrificing accuracy for speed of calculation.
As many calculations in Excel are executed using VBA, an additional issue is the accuracy of VBA, which varies with variable type and user-requested precision.
Statistical functions
The accuracy and convenience of statistical tools in Excel has been criticized, as mishandling missing data, as returning incorrect values due to inept handling of round-off and large numbers, as only selectively updating calculations on a spreadsheet when some cell values are changed, and as having a limited set of statistical tools. Microsoft has announced some of these issues are addressed in Excel 2010.
Excel MOD function error
Excel has issues with modulo operations. In the case of excessively large results, Excel will return the #NUM! error warning instead of an answer.
Fictional leap day in the year 1900
Excel includes February 29, 1900, incorrectly treating 1900 as a leap year, even though e.g. 2100 is correctly treated as a non-leap year. The bug originated from Lotus 1-2-3 (deliberately implemented to save computer memory), and was also purposely implemented in Excel, for the purpose of bug compatibility. This legacy has later been carried over into Office Open XML file format.
Thus a (not necessarily whole) number greater than or equal to 61 interpreted as a date and time is the (real) number of days after December 30, 1899, 0:00; a non-negative number less than 60 is the number of days after December 31, 1899, 0:00; and numbers with whole part 60 represent the fictional day.
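As a sketch of the rule just described (assumed helper names, not Microsoft code), the following C routine converts a whole serial number in the 1900 date system to a calendar date, treating 60 as the fictional day; the fractional part of a serial number, which encodes the time of day, is ignored here:

/* Converts days since 1970-01-01 to a Gregorian date (civil-from-days algorithm). */
static void civil_from_days(long z, int *y, int *m, int *d)
{
    z += 719468;
    long era = (z >= 0 ? z : z - 146096) / 146097;
    unsigned long doe = (unsigned long)(z - era * 146097);
    unsigned long yoe = (doe - doe / 1460 + doe / 36524 - doe / 146096) / 365;
    long yr = (long)yoe + era * 400;
    unsigned long doy = doe - (365 * yoe + yoe / 4 - yoe / 100);
    unsigned long mp = (5 * doy + 2) / 153;
    *d = (int)(doy - (153 * mp + 2) / 5 + 1);
    *m = (int)(mp < 10 ? mp + 3 : mp - 9);
    *y = (int)(yr + (*m <= 2));
}

/* In the 1900 date system, serial 25569 corresponds to 1970-01-01. */
void excel_serial_to_date(long serial, int *y, int *m, int *d)
{
    if (serial == 60) {                   /* the fictional February 29, 1900 */
        *y = 1900; *m = 2; *d = 29;
        return;
    }
    /* Below 60 the offset is from 1899-12-31; from 61 upward it is from 1899-12-30. */
    long unix_days = (serial < 60) ? serial - 25568 : serial - 25569;
    civil_from_days(unix_days, y, m, d);
}
/* Example: serial 1 yields January 1, 1900; serial 61 yields March 1, 1900. */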
Date range
Excel supports dates with years in the range 1900–9999, except that December 31, 1899, can be entered as 0 and is displayed as 0-jan-1900.
Converting a fraction of a day into hours, minutes and days by treating it as a moment on the day January 1, 1900, does not work for a negative fraction.
Conversion problems
If text is entered that happens to be in a form that is interpreted as a date, the text can be unintentionally changed to a standard date format. A similar problem occurs when a text happens to be in the form of a floating-point notation of a number. In these cases the original exact text cannot be recovered from the result. Formatting the cell as TEXT before entering ambiguous text prevents Excel from converting to a date.
This issue has caused a well known problem in the analysis of DNA, for example in bioinformatics. As first reported in 2004, genetic scientists found that Excel automatically and incorrectly converts certain gene names into dates. A follow-up study in 2016 found many peer reviewed scientific journal papers had been affected and that "Of the selected journals, the proportion of published articles with Excel files containing gene lists that are affected by gene name errors is 19.6 %." Excel parses the copied and pasted data and sometimes changes them depending on what it thinks they are. For example, MARCH1 (Membrane Associated Ring-CH-type finger 1) gets converted to the date March 1 (1-Mar) and SEPT2 (Septin 2) is converted into September 2 (2-Sep) etc. While some secondary news sources reported this as a fault with Excel, the original authors of the 2016 paper placed the blame with the researchers misusing Excel.
In August 2020 the HUGO Gene Nomenclature Committee (HGNC) published new guidelines in the journal Nature regarding gene naming in order to avoid issues with "symbols that affect data handling and retrieval." So far 27 genes have been renamed, including changing MARCH1 to MARCHF1 and SEPT1 to SEPTIN1 in order to avoid accidental conversion of the gene names into dates.
Errors with large strings
The following functions return incorrect results when passed a string longer than 255 characters:
TYPE() incorrectly returns 16, meaning "Error value"
ISTEXT(), when called as a method of the VBA object WorksheetFunction (i.e., WorksheetFunction.IsText() in VBA), incorrectly returns "false".
Filenames
Microsoft Excel will not open two documents with the same name and instead will display the following error:
A document with the name '%s' is already open. You cannot open two documents with the same name, even if the documents are in different folders. To open the second document, either close the document that is currently open, or rename one of the documents.
The reason is for calculation ambiguity with linked cells. If there is a cell ='[Book1.xlsx]Sheet1'!$G$33, and there are two books named "Book1" open, there is no way to tell which one the user means.
Versions
Early history
Microsoft originally marketed a spreadsheet program called Multiplan in 1982. Multiplan became very popular on CP/M systems, but on MS-DOS systems it lost popularity to Lotus 1-2-3. Microsoft released the first version of Excel for the Macintosh on September 30, 1985, and the first Windows version was 2.05 (to synchronize with the Macintosh version 2.2) in November 1987. Lotus was slow to bring 1-2-3 to Windows and by the early 1990s, Excel had started to outsell 1-2-3 and helped Microsoft achieve its position as a leading PC software developer. This accomplishment solidified Microsoft as a valid competitor and showed its future of developing GUI software. Microsoft maintained its advantage with regular new releases, every two years or so.
Microsoft Windows
Excel 2.0 is the first version of Excel for the Intel platform. Versions prior to 2.0 were only available on the Apple Macintosh.
Excel 2.0 (1987)
The first Windows version was labeled "2" to correspond to the Mac version. This included a run-time version of Windows.
BYTE in 1989 listed Excel for Windows as among the "Distinction" winners of the BYTE Awards. The magazine stated that the port of the "extraordinary" Macintosh version "shines", with a user interface as good as or better than the original.
Excel 3.0 (1990)
Included toolbars, drawing capabilities, outlining, add-in support, 3D charts, and many more new features.
Excel 4.0 (1992)
Introduced auto-fill.
Also, an Easter egg in Excel 4.0 reveals a hidden animation of a dancing set of numbers 1 through 3, representing Lotus 1-2-3, which is then crushed by an Excel logo.
Excel 5.0 (1993)
With version 5.0, Excel included Visual Basic for Applications (VBA), a programming language based on Visual Basic that adds the ability to automate tasks in Excel and to provide user-defined functions (UDFs) for use in worksheets. VBA includes a fully featured integrated development environment (IDE). Macro recording can produce VBA code replicating user actions, allowing simple automation of regular tasks. VBA allows the creation of forms and in-worksheet controls to communicate with the user. The language supports use (but not creation) of ActiveX (COM) DLLs; later versions added support for class modules, allowing the use of basic object-oriented programming techniques.
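As a brief, hedged illustration of the UDF capability (the function name and formula are illustrative, not part of Excel itself), a worksheet-callable function can be defined in a VBA module as follows:

' Illustrative user-defined function: once placed in a VBA module it can be
' called from any worksheet cell, e.g. =CELSIUS(A1).
Public Function CELSIUS(fahrenheit As Double) As Double
    CELSIUS = (fahrenheit - 32) * 5 / 9
End Function

Once the module is part of the workbook, the function can be entered in a cell exactly like a built-in worksheet function.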
The automation functionality provided by VBA made Excel a target for macro viruses. This caused serious problems until antivirus products began to detect these viruses. Microsoft belatedly took steps to prevent the misuse by adding the ability to disable macros completely, to enable macros when opening a workbook or to trust all macros signed using a trusted certificate.
Versions 5.0 to 9.0 of Excel contain various Easter eggs, including a "Hall of Tortured Souls" (a Doom-like minigame), although since version 10 Microsoft has taken measures to eliminate such undocumented features from its products.
5.0 was released in a 16-bit x86 version for Windows 3.1 and later in a 32-bit version for NT 3.51 (x86/Alpha/PowerPC).
Excel 95 (v7.0)
Released in 1995 with Microsoft Office for Windows 95, this is the first major version after Excel 5.0, as there is no Excel 6.0 with all of the Office applications standardizing on the same major version number.
Internal rewrite to 32 bits. Almost no external changes, but faster and more stable.
Excel 97 (v8.0)
Included in Office 97 (for x86 and Alpha). This was a major upgrade that introduced the paper-clip Office Assistant and featured standard VBA in place of the internal Excel Basic. It introduced the now-removed Natural Language labels.
This version of Excel includes a flight simulator as an Easter Egg.
Excel 2000 (v9.0)
Included in Office 2000. This was a minor upgrade, but it introduced an improved clipboard that can hold multiple objects at once. The Office Assistant, whose frequent unsolicited appearance in Excel 97 had annoyed many users, became less intrusive.
Excel 2002 (v10.0)
Included in Office XP. Very minor enhancements.
Excel 2003 (v11.0)
Included in Office 2003. Minor enhancements, the most significant being the new Tables.
Excel 2007 (v12.0)
Included in Office 2007. This release was a major upgrade from the previous version. Similar to other updated Office products, Excel 2007 used the new Ribbon menu system. This was different from what users were accustomed to, and was met with mixed reactions. One study reported fairly good acceptance by users, except among highly experienced users and users of word-processing applications with a classical WIMP interface, but found users less convinced in terms of efficiency and organization. However, an online survey reported that a majority of respondents had a negative opinion of the change, with advanced users being "somewhat more negative" than intermediate users, and users reporting a self-estimated reduction in productivity.
Added functionality included the SmartArt set of editable business diagrams. Also added were improved management of named variables through the Name Manager and much-improved flexibility in formatting graphs, which allows (x, y) coordinate labeling and lines of arbitrary weight. Several improvements to pivot tables were introduced.
Also like other office products, the Office Open XML file formats were introduced, including .xlsm for a workbook with macros and .xlsx for a workbook without macros.
Specifically, many of the size limitations of previous versions were greatly increased. To illustrate, the number of rows was now 1,048,576 (2^20) and the number of columns was 16,384 (2^14; the far-right column is XFD). This changes what is a valid A1 reference versus a named range. This version made more extensive use of multiple cores for the calculation of spreadsheets; however, VBA macros are not handled in parallel, and XLL add-ins were only executed in parallel if they were thread-safe and this was indicated at registration.
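For reference, the enlarged grid can be confirmed from VBA; the figures in the comments assume a workbook saved in the post-2007 Office Open XML formats:

' Worksheet dimensions in the Office Open XML formats introduced with Excel 2007.
Sub ShowGridSize()
    Debug.Print ActiveSheet.Rows.Count     ' 1048576 rows (2^20)
    Debug.Print ActiveSheet.Columns.Count  ' 16384 columns (2^14); last column is XFD
End Sub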
Excel 2010 (v14.0)
Included in Office 2010, this is the next major version after v12.0, as version number 13 was skipped.
Minor enhancements and 64-bit support, including the following:
Multi-threading recalculation (MTR) for commonly used functions
Improved pivot tables
More conditional formatting options
Additional image editing capabilities
In-cell charts called sparklines
Ability to preview before pasting
Office 2010 backstage feature for document-related tasks
Ability to customize the Ribbon
Many new formulas, most highly specialized to improve accuracy
Excel 2013 (v15.0)
Included in Office 2013; new tools in this release include:
Improved Multi-threading and Memory Contention
Flash Fill
Power View
Power Pivot
Timeline Slicer
Windows App
Inquire
50 new functions
Excel 2016 (v16.0)
Included in Office 2016; new tools in this release include:
Power Query integration
Read-only mode for Excel
Keyboard access for Pivot Tables and Slicers in Excel
New Chart Types
Quick data linking in Visio
Excel forecasting functions
Support for multi-selection of Slicer items using touch
Time grouping and Pivot Chart Drill Down
Excel data cards
Excel 2019, Office 365 and subsequent (v16.0)
Microsoft no longer releases Office or Excel in discrete versions. Instead, features are introduced automatically over time using Windows Update. The version number remains 16.0; as a result, only the approximate dates when features appear can be given.
Dynamic Arrays. These are essentially array formulas, but they "spill" automatically into neighboring cells and do not need Ctrl-Shift-Enter to create them. Further, dynamic arrays are the default behavior, with new "@" and "#" operators to provide compatibility with previous versions. This is perhaps the biggest structural change since 2007, and is in response to a similar feature in Google Sheets. Dynamic arrays started appearing in pre-releases about 2018, and as of March 2020 are available in published versions of Office 365 provided a user selected "Office Insiders".
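A minimal sketch of the spilling behavior, assuming a build with dynamic-array support and using Formula2, the dynamic-array-aware counterpart of the older Formula property (the ranges are arbitrary):

Sub SpillExample()
    ' A dynamic-array formula entered from VBA; no Ctrl-Shift-Enter is required.
    Range("D1").Formula2 = "=SORT(A1:A10)"     ' spills into D1:D10 automatically

    ' "#" refers to the whole spill range; "@" forces a single value
    ' (implicit intersection) for compatibility with pre-dynamic-array behavior.
    Range("F1").Formula2 = "=SUM(D1#)"
    Range("G1").Formula2 = "=@SORT(A1:A10)"
End Sub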
Apple Macintosh
1985 Excel 1.0
1988 Excel 1.5
1989 Excel 2.2
1990 Excel 3.0
1992 Excel 4.0
1993 Excel 5.0 (part of Office 4.x—Final Motorola 680x0 version and first PowerPC version)
1998 Excel 8.0 (part of Office 98)
2000 Excel 9.0 (part of Office 2001)
2001 Excel 10.0 (part of Office v. X)
2004 Excel 11.0 (part of Office 2004)
2008 Excel 12.0 (part of Office 2008)
2010 Excel 14.0 (part of Office 2011)
2015 Excel 15.0 (part of Office 2016—Office 2016 for Mac brings the Mac version much closer to parity with its Windows cousin, harmonizing many of the reporting and high-level developer functions, while bringing the ribbon and styling into line with its PC counterpart.)
OS/2
1989 Excel 2.2
1990 Excel 2.3
1991 Excel 3.0
Mobile
Excel Mobile is a spreadsheet program that can edit XLSX files. It can edit and format text in cells, calculate formulas, search within the spreadsheet, sort rows and columns, freeze panes, filter the columns, add comments, and create charts. It can't add columns or rows except at the edge of the document, rearrange columns or rows, delete rows or columns, or add spreadsheet tabs. The 2007 version has the ability to use a full-screen mode to deal with limited screen resolution, as well as split panes to view different parts of a worksheet at one time. Protection settings, zoom settings, autofilter settings, certain chart formatting, hidden sheets, and other features are not supported on Excel Mobile, and will be modified upon opening and saving a workbook. In 2015, Excel Mobile became available for Windows 10 and Windows 10 Mobile on Windows Store.
Summary
Impact
Excel offers many user interface tweaks over the earliest electronic spreadsheets; however, the essence remains the same as in the original spreadsheet software, VisiCalc: the program displays cells organized in rows and columns, and each cell may contain data or a formula, with relative or absolute references to other cells.
Excel 2.0 for Windows, which was modeled after its Mac GUI-based counterpart, indirectly expanded the installed base of the then-nascent Windows environment. Excel 2.0 was released a month before Windows 2.0, and the installed base of Windows was so low at that point in 1987 that Microsoft had to bundle a runtime version of Windows 1.0 with Excel 2.0. Unlike Microsoft Word, there never was a DOS version of Excel.
Excel became the first spreadsheet to allow the user to define the appearance of spreadsheets (fonts, character attributes, and cell appearance). It also introduced intelligent cell re-computation, where only cells dependent on the cell being modified are updated (previous spreadsheet programs recomputed everything all the time or waited for a specific user command). Excel introduced auto-fill, the ability to drag and expand the selection box to automatically copy a cell or row contents to adjacent cells or rows, adjusting the copies intelligently by automatically incrementing cell references or contents. Excel also introduced extensive graphing capabilities.
Security
Because Excel is widely used, it has been attacked by hackers. While Excel is not directly exposed to the Internet, if an attacker can get a victim to open a file in Excel and there is an appropriate security bug in Excel, then the attacker can gain control of the victim's computer. The UK's GCHQ has a tool named TORNADO ALLEY for this purpose.
See also
Comparison of spreadsheet software
Comparison of risk analysis Microsoft Excel add-ins
Numbers (spreadsheet)—the iWork equivalent
Spreadmart
References
General sources
External links
– official site
1985 software
Articles with example code
Classic Mac OS software
Computer-related introductions in 1985
Excel
Spreadsheet software for macOS
Spreadsheet software for Windows |
20287 | https://en.wikipedia.org/wiki/Microsoft%20Word | Microsoft Word | Microsoft Word is a word processing program developed by Microsoft. It was first released on October 25, 1983, under the name Multi-Tool Word for Xenix systems. Subsequent versions were later written for several other platforms including IBM PCs running DOS (1983), Apple Macintosh running the Classic Mac OS (1985), AT&T UNIX PC (1985), Atari ST (1988), OS/2 (1989), Microsoft Windows (1989), SCO Unix (1990), and macOS (2001).
Commercial versions of Word are licensed as a standalone product or as a component of Microsoft Office 365 or a Microsoft 365 subscription, Windows RT, or the discontinued Microsoft Works suite.
History
Origins
In 1981, Microsoft hired Charles Simonyi, the primary developer of Bravo, the first GUI word processor, which was developed at Xerox PARC. Simonyi started work on a word processor called Multi-Tool Word and soon hired Richard Brodie, a former Xerox intern, who became the primary software engineer.
Microsoft announced Multi-Tool Word for Xenix and MS-DOS in 1983. Its name was soon simplified to Microsoft Word. Free demonstration copies of the application were bundled with the November 1983 issue of PC World, making it the first to be distributed on-disk with a magazine. That year Microsoft demonstrated Word running on Windows.
Unlike most MS-DOS programs at the time, Microsoft Word was designed to be used with a mouse. Advertisements depicted the Microsoft Mouse and described Word as a WYSIWYG, windowed word processor with the ability to undo and display bold, italic, and underlined text, although it could not render fonts. It was not initially popular, since its user interface was different from the leading word processor at the time, WordStar. However, Microsoft steadily improved the product, releasing versions 2.0 through 5.0 over the next six years. In 1985, Microsoft ported Word to the classic Mac OS (known as Macintosh System Software at the time). This was made easier by Word for DOS having been designed for use with high-resolution displays and laser printers, even though none were yet available to the general public. It was also notable for its very fast cut-and-paste function and unlimited number of undo operations, which are due to its usage of the piece table data structure.
Following the precedents of LisaWrite and MacWrite, Word for Mac OS added true WYSIWYG features. It fulfilled a need for a word processor that was more capable than MacWrite. After its release, Word for Mac OS's sales were higher than its MS-DOS counterpart for at least four years.
The second release of Word for Mac OS, shipped in 1987, was named Word 3.0 to synchronize its version number with Word for DOS; this was Microsoft's first attempt to synchronize version numbers across platforms. Word 3.0 included numerous internal enhancements and new features, including the first implementation of the Rich Text Format (RTF) specification, but was plagued with bugs. Within a few months, Word 3.0 was superseded by a more stable Word 3.01, which was mailed free to all registered users of 3.0. After MacWrite Pro was discontinued in the mid-1990s, Word for Mac OS never had any serious rivals. Word 5.1 for Mac OS, released in 1992, was a very popular word processor owing to its elegance, relative ease of use and feature set. Many users say it is the best version of Word for Mac OS ever created.
In 1986, an agreement between Atari and Microsoft brought Word to the Atari ST under the name Microsoft Write. The Atari ST version was a port of Word 1.05 for the Mac OS and was never updated.
The first version of Word for Windows was released in 1989. With the release of Windows 3.0 the following year, sales began to pick up and Microsoft soon became the market leader for word processors for IBM PC-compatible computers. In 1991, Microsoft capitalized on Word for Windows' increasing popularity by releasing a version of Word for DOS, version 5.5, that replaced its unique user interface with an interface similar to a Windows application. When Microsoft became aware of the Year 2000 problem, it made Microsoft Word 5.5 for DOS available for download free of charge, and it remains available for download from Microsoft's web site.
In 1991, Microsoft embarked on a project code-named Pyramid to completely rewrite Microsoft Word from the ground up. Both the Windows and Mac OS versions would start from the same code base. It was abandoned when it was determined that it would take the development team too long to rewrite and then catch up with all the new capabilities that could have been added at the same time without a rewrite. Instead, the next versions of Word for Windows and Mac OS, dubbed version 6.0, both started from the code base of Word for Windows 2.0.
With the release of Word 6.0 in 1993, Microsoft again attempted to synchronize the version numbers and coordinate product naming across platforms, this time across DOS, Mac OS, and Windows (this was the last version of Word for DOS). It introduced AutoCorrect, which automatically fixed certain typing errors, and AutoFormat, which could reformat many parts of a document at once. While the Windows version received favorable reviews (e.g., from InfoWorld), the Mac OS version was widely derided. Many accused it of being slow, clumsy and memory intensive, and its user interface differed significantly from Word 5.1. In response to user requests, Microsoft offered Word 5 again, after it had been discontinued. Subsequent versions of Word for macOS are no longer direct ports of Word for Windows, instead featuring a mixture of ported code and native code.
Word for Windows
Word for Windows is available stand-alone or as part of the Microsoft Office suite. Word contains rudimentary desktop publishing capabilities and is the most widely used word processing program on the market. Word files are commonly used as the format for sending text documents via e-mail because almost every user with a computer can read a Word document by using the Word application, a Word viewer or a word processor that imports the Word format (see Microsoft Word Viewer).
Word 6 for Windows NT was the first 32-bit version of the product, released with Microsoft Office for Windows NT around the same time as Windows 95. It was a straightforward port of Word 6.0. Starting with Word 95, releases of Word were named after the year of its release, instead of its version number.
Word 2007 introduced a redesigned user interface that emphasised the most common controls, dividing them into tabs and adding specific options depending on the context, such as selecting an image or editing a table. This user interface, called the Ribbon, was included in Excel, PowerPoint and Access 2007, and was later introduced to other Office applications with Office 2010 and to Windows applications such as Paint and WordPad with Windows 7.
The redesigned interface also includes a toolbar that appears when selecting text, with options for formatting included.
Word 2007 also included the option to save documents as PDF (Adobe Acrobat) or XPS files, and to upload Word documents as blog posts on services such as WordPress.
Word 2010 allows the customization of the Ribbon, adds a Backstage view for file management, has improved document navigation, allows creation and embedding of screenshots, and integrates with online services such as Microsoft OneDrive.
Word 2019 added a dictation function.
Word for Mac
The Mac was introduced January 24, 1984, and Microsoft introduced Word 1.0 for Mac a year later, on January 18, 1985. The DOS, Mac, and Windows versions are quite different from each other. Only the Mac version was WYSIWYG and used a graphical user interface, far ahead of the other platforms. Each platform restarted its version numbering at "1.0". There was no version 2 on the Mac, but version 3 came out on January 31, 1987, as described above. Word 4.0 came out on November 6, 1990, and added automatic linking with Excel, the ability to flow text around graphics and a WYSIWYG page view editing mode.
Word 5.1 for Mac, released in 1992, ran on the original 68000 CPU and was the last version to be designed specifically as a Macintosh application. The later Word 6 was a Windows port and was poorly received. Word 5.1 continued to run well up to the last version of the classic Mac OS, and many people continue to run it to this day under an emulated classic Mac system for some of its features, such as document generation and renumbering, or to access their old files.
In 1997, Microsoft formed the Macintosh Business Unit as an independent group within Microsoft focused on writing software for Mac OS. Its first version of Word, Word 98, was released with Office 98 Macintosh Edition. Document compatibility reached parity with Word 97, and it included features from Word 97 for Windows, including spell and grammar checking with squiggles. Users could choose the menus and keyboard shortcuts to be similar to either Word 97 for Windows or Word 5 for Mac OS.
Word 2001, released in 2000, added a few new features, including the Office Clipboard, which allowed users to copy and paste multiple items. It was the last version to run on classic Mac OS and, on Mac OS X, it could only run within the Classic Environment. Word X, released in 2001, was the first version to run natively on, and required, Mac OS X, and introduced non-contiguous text selection.
Word 2004 was released in May 2004. It included a new Notebook Layout view for taking notes either by typing or by voice. Other features, such as tracking changes, were made more similar with Office for Windows.
Word 2008, released on January 15, 2008, included a Ribbon-like feature, called the Elements Gallery, that can be used to select page layouts and insert custom diagrams and images. It also included a new view focused on publishing layout, integrated bibliography management, and native support for the new Office Open XML format. It was the first version to run natively on Intel-based Macs.
Word 2011, released in October 2010, replaced the Elements Gallery in favor of a Ribbon user interface that is much more similar to Office for Windows, and includes a full-screen mode that allows users to focus on reading and writing documents, and support for Office Web Apps.
Word for Mobile
Word Mobile is a word processor that allows creating and editing documents. It supports basic formatting, such as bolding, changing font size, and changing colors (from red, yellow, or green). It can add comments, but can't edit documents with tracked changes. It can't open password protected documents, change the typeface, text alignment, or style (normal, heading 1); create bulleted lists; insert pictures; or undo. Word Mobile is neither able to display nor insert footnotes, endnotes, page headers, page footers, page breaks, certain indentation of lists, and certain fonts while working on a document, but retains them if the original document has them. In addition to the features of the 2013 version, the 2007 version on Windows Mobile also has the ability to save documents in the Rich Text Format and open legacy PSW (Pocket Word). Furthermore, it includes a spell checker, word count tool, and a "Find and Replace" command. In 2015, Word Mobile became available for Windows 10 and Windows 10 Mobile on Windows Store.
File formats
Filename extensions
Microsoft Word's native file formats are denoted either by a .doc or .docx filename extension.
Although the .doc extension has been used in many different versions of Word, it actually encompasses four distinct file formats:
Word for DOS
Word for Windows 1 and 2; Word 3 and 4 for Mac OS
Word 6 and Word 95 for Windows; Word 6 for Mac OS
Word 97 and later for Windows; Word 98 and later for Mac OS
(The classic Mac OS of the era did not use filename extensions.)
The newer .docx extension signifies the Office Open XML international standard for Office documents and is used by default by Word 2007 and later for Windows as well as Word 2008 and later for macOS.
Binary formats (Word 97–2007)
During the late 1990s and early 2000s, the default Word document format (.DOC) became a de facto standard of document file formats for Microsoft Office users. There are different versions of "Word Document Format" used by default in Word 97–2007. Each binary word file is a Compound File, a hierarchical file system within a file. According to Joel Spolsky, Word Binary File Format is extremely complex mainly because its developers had to accommodate an overwhelming number of features and prioritize performance over anything else.
As with all OLE Compound Files, the Word binary format consists of "storages", which are analogous to computer folders, and "streams", which are similar to computer files. Each storage may contain streams or other storages. Each Word binary file must contain a stream called "WordDocument", and this stream must start with a File Information Block (FIB). The FIB serves as the first point of reference for locating everything else, such as where the text in a Word document starts and ends, what version of Word created the document, and other attributes.
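As a side note on identifying the container (a hedged sketch, unrelated to parsing the FIB itself): every OLE Compound File, and therefore every binary .doc file, begins with the 8-byte signature D0 CF 11 E0 A1 B1 1A E1, whereas the ZIP-based .docx files begin with the bytes "PK". A small VBA check might look like this:

' Returns True if the file begins with the OLE Compound File signature
' shared by all binary .doc documents (D0 CF 11 E0 A1 B1 1A E1).
Function IsCompoundFile(ByVal path As String) As Boolean
    Dim expected As Variant, sig(0 To 7) As Byte, i As Long, f As Integer
    expected = Array(&HD0, &HCF, &H11, &HE0, &HA1, &HB1, &H1A, &HE1)

    f = FreeFile
    Open path For Binary Access Read As #f
    Get #f, 1, sig                    ' read the first eight bytes
    Close #f

    IsCompoundFile = True
    For i = 0 To 7
        If sig(i) <> expected(i) Then IsCompoundFile = False
    Next i
End Function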
Word 2007 and later continue to support the DOC file format, although it is no longer the default.
XML Document (Word 2003)
The XML format introduced in Word 2003 was a simple, XML-based format called WordProcessingML or WordML.
The Microsoft Office XML formats are XML-based document formats (or XML schemas) introduced in versions of Microsoft Office prior to Office 2007. Microsoft Office XP introduced a new XML format for storing Excel spreadsheets and Office 2003 added an XML-based format for Word documents.
These formats were succeeded by Office Open XML (ECMA-376) in Microsoft Office 2007.
Cross-version compatibility
Opening a Word Document file in a version of Word other than the one with which it was created can cause an incorrect display of the document. The document formats of the various versions change in subtle and not so subtle ways (such as changing the font, or the handling of more complex tasks like footnotes). Formatting created in newer versions does not always survive when viewed in older versions of the program, nearly always because that capability does not exist in the previous version. Rich Text Format (RTF), an early effort to create a format for interchanging formatted text between applications, is an optional format for Word that retains most formatting and all content of the original document.
Third-party formats
Plugins permitting the Windows versions of Word to read and write formats it does not natively support, such as international standard OpenDocument format (ODF) (ISO/IEC 26300:2006), are available. Up until the release of Service Pack 2 (SP2) for Office 2007, Word did not natively support reading or writing ODF documents without a plugin, namely the SUN ODF Plugin or the OpenXML/ODF Translator. With SP2 installed, ODF format 1.1 documents can be read and saved like any other supported format in addition to those already available in Word 2007. The implementation faces substantial criticism, and the ODF Alliance and others have claimed that the third-party plugins provide better support. Microsoft later declared that the ODF support has some limitations.
In October 2005, one year before the Microsoft Office 2007 suite was released, Microsoft declared that there was insufficient demand from Microsoft customers for the international standard OpenDocument format support, and that therefore it would not be included in Microsoft Office 2007. This statement was repeated in the following months. As an answer, on October 20, 2005 an online petition was created to demand ODF support from Microsoft.
In May 2006, the ODF plugin for Microsoft Office was released by the OpenDocument Foundation. Microsoft declared that it had no relationship with the developers of the plugin.
In July 2006, Microsoft announced the creation of the Open XML Translator project – tools to build a technical bridge between the Microsoft Office Open XML Formats and the OpenDocument Format (ODF). This work was started in response to government requests for interoperability with ODF. The goal of the project was not to add ODF support to Microsoft Office, but only to create a plugin and an external tool-set. In February 2007, this project released a first version of the ODF plugin for Microsoft Word.
In February 2007, Sun released an initial version of its ODF plugin for Microsoft Office. Version 1.0 was released in July 2007.
Microsoft Word 2007 (Service Pack 1) supports (for output only) PDF and XPS formats, but only after manual installation of the Microsoft 'Save as PDF or XPS' add-on. On later releases, this was offered by default.
Features and flaws
Among its features, Word includes a built-in spell checker, a thesaurus, a dictionary, and utilities for manipulating and editing text. The following are some aspects of its feature set.
Templates
Several later versions of Word include the ability for users to create their own formatting templates, allowing them to define a file in which the title, heading, paragraph, and other element designs differ from the standard Word templates. Users can find how to do this under the Help section located near the top right corner (Word 2013 on Windows 8).
For example, Normal.dotm is the master template from which all Word documents are created. It determines the margin defaults as well as the layout of the text and font defaults. Although Normal.dotm is already set with certain defaults, the user can change it to new defaults. This will change other documents which were created using the template. It was previously Normal.dot.
Image formats
Word can import and display images in common bitmap formats such as JPG and GIF. It can also be used to create and display simple line-art. Microsoft Word added support for the common SVG vector image format in 2017 for Office 365 ProPlus subscribers and this functionality was also included in the Office 2019 release.
WordArt
WordArt enables drawing text in a Microsoft Word document such as a title, watermark, or other text, with graphical effects such as skewing, shadowing, rotating, stretching in a variety of shapes and colors and even including three-dimensional effects. Users can apply formatting effects such as shadow, bevel, glow, and reflection to their document text as easily as applying bold or underline. Users can also spell-check text that uses visual effects, and add text effects to paragraph styles.
Macros
A macro is a rule or pattern that specifies how a certain input sequence (often a sequence of characters) should be mapped to an output sequence according to a defined process. Frequently used or repetitive sequences of keystrokes and mouse movements can be automated.
Like other Microsoft Office documents, Word files can include advanced macros and even embedded programs. The language was originally WordBasic, but changed to Visual Basic for Applications as of Word 97.
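To illustrate why embedded macros are powerful, and, as described below, attractive to malware authors, here is a deliberately benign sketch: code placed in a document's ThisDocument module under the Document_Open event runs automatically when the document is opened, subject to the macro security settings discussed later in this section:

' Benign example of an auto-running macro: placed in a document's ThisDocument
' module, this runs whenever the document is opened (if macros are allowed).
Private Sub Document_Open()
    MsgBox "Opened: " & ActiveDocument.Name
End Sub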
This extensive functionality can also be used to run and propagate viruses in documents. The tendency for people to exchange Word documents via email, USB flash drives, and floppy disks made this an especially attractive vector in 1999. A prominent example was the Melissa virus, but countless others have existed.
These macro viruses were the only known cross-platform threats between Windows and Macintosh computers and they were the only infection vectors to affect any macOS system up until the advent of video codec trojans in 2007. Microsoft released patches for Word X and Word 2004 that effectively eliminated the macro problem on the Mac by 2006.
Word's macro security setting, which regulates when macros may execute, can be adjusted by the user, but in the most recent versions of Word, it is set to HIGH by default, generally reducing the risk from macro-based viruses, which have become uncommon.
Layout issues
Before Word 2010 (Word 14) for Windows, the program was unable to correctly handle ligatures defined in OpenType fonts. Those ligature glyphs with Unicode codepoints may be inserted manually, but are not recognized by Word for what they are, breaking spell checking, while custom ligatures present in the font are not accessible at all. Since Word 2010, the program now has advanced typesetting features which can be enabled: OpenType ligatures, kerning, and hyphenation (previous versions already had the latter two features). Other layout deficiencies of Word include the inability to set crop marks or thin spaces. Various third-party workaround utilities have been developed.
In Word 2004 for Mac OS X, support of complex scripts was inferior even to Word 97, and Word 2004 did not support Apple Advanced Typography features like ligatures or glyph variants.
Bullets and numbering
Microsoft Word supports bullet lists and numbered lists. It also features a numbering system that helps add correct numbers to pages, chapters, headers, footnotes, and entries of tables of content; these numbers automatically change to correct ones as new items are added or existing items are deleted. Bullets and numbering can be applied directly to paragraphs to convert them to lists. Word 97 through 2003, however, had problems adding correct numbers to numbered lists. In particular, a second, unrelated numbered list might not have started with number one, but instead resumed numbering after the last numbered list. Although Word 97 supported a hidden marker indicating that the list numbering must restart afterward, the command to insert this marker (the Restart Numbering command) was only added in Word 2003. However, if one were to cut the first item of the list and paste it as another item (e.g. fifth), then the restart marker would move with it and the list would restart in the middle instead of at the top.
Users can also create tables in Word. Depending on the version, Word can perform simple calculations in tables, and it also supports formulas and equations.
Word continues to default to non-Unicode characters and non-hierarchical bulleting, despite user preference for PowerPoint-style symbol hierarchies (e.g., filled circle/em dash/filled square/en dash/open circle) and universal compatibility.
AutoSummarize
Available in certain versions of Word (e.g., Word 2007), AutoSummarize highlights passages or phrases that it considers valuable and can be a quick way of generating a crude abstract or an executive summary. The amount of text to be retained can be specified by the user as a percentage of the current amount of text.
According to Ron Fein of the Word 97 team, AutoSummarize cuts wordy copy to the bone by counting words and ranking sentences. First, AutoSummarize identifies the most common words in the document (barring "a" and "the" and the like) and assigns a "score" to each word – the more frequently a word is used, the higher the score. Then, it "averages" each sentence by adding the scores of its words and dividing the sum by the number of words in the sentence – the higher the average, the higher the rank of the sentence. "It's like the ratio of wheat to chaff," explains Fein.
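The following toy VBA sketch illustrates the frequency-scoring idea Fein describes; it is an illustration only, not Microsoft's actual AutoSummarize implementation, and it ignores punctuation and stop words:

' Toy sketch: count word frequencies, then rank each sentence by the
' average frequency of the words it contains.
Function WordFrequencies(ByVal text As String) As Object
    Dim d As Object, w As Variant
    Set d = CreateObject("Scripting.Dictionary")
    For Each w In Split(LCase(text), " ")
        d(w) = d(w) + 1              ' a missing key starts at Empty, i.e. zero
    Next w
    Set WordFrequencies = d
End Function

Function SentenceScore(ByVal sentence As String, ByVal freq As Object) As Double
    Dim words() As String, i As Long, total As Double
    words = Split(LCase(sentence), " ")
    For i = LBound(words) To UBound(words)
        If freq.Exists(words(i)) Then total = total + freq(words(i))
    Next i
    If UBound(words) >= 0 Then SentenceScore = total / (UBound(words) + 1)
End Function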
AutoSummarize was removed from Microsoft Word for Mac OS X 2011, although it was present in Word for Mac 2008. AutoSummarize was removed from the Office 2010 release version (14) as well.
Word for the web
Word for the web is a free lightweight version of Microsoft Word available as part of Office on the web, which also includes web versions of Microsoft Excel and Microsoft PowerPoint.
Word for the web lacks some Ribbon tabs, such as Design and Mailings. Mailings allows users to print envelopes and labels, and manage mail merge printing of Word documents. Word for the web is not able to edit certain objects, such as equations, shapes, text boxes, or drawings, but a placeholder may be present in the document. Certain advanced features like table sorting or columns will not be displayed but are preserved as they were in the document. Other views available in the Word desktop app (Outline, Draft, Web Layout, and Full Screen Reading) are not available, nor are side-by-side viewing, split windows, and the ruler.
Password protection
There are three password types that can be set in Microsoft Word:
Password to open a document
Password to modify a document
Password restricting formatting and editing
The second and third password types were developed by Microsoft for convenient shared use of documents rather than for their protection. There is no encryption of documents that are protected by such passwords; the Microsoft Office protection system saves a hash sum of the password in the document's header, where it can be easily accessed and removed by specialized software.
The password to open a document offers much tougher protection, which has been steadily enhanced in subsequent editions of Microsoft Office.
Word 95 and all preceding editions had the weakest protection, which converted the password to a 16-bit key.
Key length in Word 97 and 2000 was strengthened to 40 bits. However, modern cracking software can remove such a password very quickly – a persistent cracking process takes one week at most. Use of rainbow tables reduces password removal time to several seconds. Some password recovery software can not only remove a password but also find the actual password that was used to encrypt the document by means of a brute-force attack. Statistically, the possibility of recovering the password depends on the password strength.
In Word 2003/XP the default protection remained the same, but an option allowing advanced users to choose a Cryptographic Service Provider (CSP) was added. If a strong CSP is chosen, guaranteed document decryption becomes unavailable, and therefore a password can't be removed from the document. Nonetheless, a password can be picked fairly quickly with a brute-force attack, because its speed is still high regardless of the CSP selected. Moreover, since the CSPs are not active by default, their use is limited to advanced users only.
Word 2007 offers significantly more secure document protection, which utilizes the modern Advanced Encryption Standard (AES): the password is converted to a 128-bit key by applying a SHA-1 hash function 50,000 times. This makes password removal impossible (as of today, no computer exists that can pick the key in a reasonable amount of time) and drastically slows brute-force attacks down to several hundred passwords per second.
Word 2010's protection algorithm was not changed apart from increasing the number of SHA-1 iterations to 100,000, which in turn halved the brute-force attack speed again.
Reception
Initial releases of Word were met with criticism. BYTE in 1984 criticized the documentation for Word 1.1 and 2.0 for DOS, calling it "a complete farce". It called the software "clever, put together well, and performs some extraordinary feats", but concluded that "especially when operated with the mouse, has many more limitations than benefits ... extremely frustrating to learn and operate efficiently". PC Magazine review was very mixed, stating "I've run into weird word processors before, but this is the first time one's nearly knocked me down for the count" but acknowledging that Word's innovations were the first that caused the reviewer to consider abandoning WordStar. While the review cited an excellent WYSIWYG display, sophisticated print formatting, windows, and footnoting as merits, it criticized many small flaws, very slow performance, and "documentation apparently produced by Madame Sadie's Pain Palace". It concluded that Word was "two releases away from potential greatness".
Compute!'s Apple Applications in 1987 stated that "despite a certain awkwardness", Word 3.01 "will likely become the major Macintosh word processor" with "far too many features to list here". While criticizing the lack of true WYSIWYG, the magazine concluded that "Word is marvelous. It's like a Mozart or Edison, whose occasional gaucherie we excuse because of his great gifts".
Compute! in 1989 stated that Word 5.0's integration of text and graphics made it "a solid engine for basic desktop publishing". The magazine approved of improvements to text mode, described the $75 price for upgrading from an earlier version as "the deal of the decade", and concluded that "as a high-octane word processor, Word is definitely worth a look".
During the first quarter of 1996, Microsoft Word accounted for 80% of the worldwide word processing market.
Release history
References
Further reading
Tsang, Cheryl. Microsoft: First Generation. New York: John Wiley & Sons, Inc. .
Liebowitz, Stan J. & Margolis, Stephen E. Winners, Losers & Microsoft: Competition and Antitrust in High Technology Oakland: Independent Institute. .
External links
– official site
Find and replace text by using regular expressions (Advanced) - archived official support website
Word
Classic Mac OS word processors
DOS word processors
MacOS word processors
Windows word processors
Technical communication tools
Screenshot software
1983 software
Atari ST software |
20288 | https://en.wikipedia.org/wiki/Microsoft%20Office | Microsoft Office | Microsoft Office, or simply Office, is a family of client software, server software, and services developed by Microsoft. It was first announced by Bill Gates on August 1, 1988, at COMDEX in Las Vegas. Initially a marketing term for an office suite (bundled set of productivity applications), the first version of Office contained Microsoft Word, Microsoft Excel, and Microsoft PowerPoint. Over the years, Office applications have grown substantially closer with shared features such as a common spell checker, OLE data integration and Visual Basic for Applications scripting language. Microsoft also positions Office as a development platform for line-of-business software under the Office Business Applications brand. On July 10, 2012, Softpedia reported that Office was being used by over a billion people worldwide.
Office is produced in several versions targeted towards different end-users and computing environments. The original, and most widely used version, is the desktop version, available for PCs running the Windows and macOS operating systems. Microsoft also maintains mobile apps for Android and iOS. Office on the web is a version of the software that runs within a web browser.
Since Office 2013, Microsoft has promoted Office 365 as the primary means of obtaining Microsoft Office: it allows the use of the software and other services on a subscription business model, and users receive feature updates to the software for the lifetime of the subscription, including new features and cloud computing integration that are not necessarily included in the "on-premises" releases of Office sold under conventional license terms. In 2017, revenue from Office 365 overtook conventional license sales. Microsoft also rebranded most of their standard Office 365 editions into Microsoft 365 to emphasize their current inclusion of products and services.
The current on-premises, desktop version of Office is Office 2021, released on October 5, 2021.
Components
Core apps and services
Microsoft Word is a word processor included in Microsoft Office and some editions of the now-discontinued Microsoft Works. The first version of Word, released in the autumn of 1983, was for the MS-DOS operating system and introduced the computer mouse to more users. Word 1.0 could be purchased with a bundled mouse, though none was required. Following the precedents of LisaWrite and MacWrite, Word for Macintosh attempted to add closer WYSIWYG features into its package. Word for Mac was released in 1985. Word for Mac was the first graphical version of Microsoft Word. Initially, it implemented the proprietary .doc format as its primary format. Word 2007, however, deprecated this format in favor of Office Open XML, which was later standardized by Ecma International as an open format. Support for Portable Document Format (PDF) and OpenDocument (ODF) was first introduced in Word for Windows with Service Pack 2 for Word 2007.
Microsoft Excel is a spreadsheet editor that originally competed with the dominant Lotus 1-2-3 and eventually outsold it. Microsoft released the first version of Excel for the Mac OS in 1985 and the first Windows version (numbered 2.05 to line up with the Mac) in November 1987.
Microsoft PowerPoint is a presentation program used to create slideshows composed of text, graphics, and other objects, which can be displayed on-screen and shown by the presenter or printed out on transparencies or slides.
Microsoft OneNote is a notetaking program that gathers handwritten or typed notes, drawings, screen clippings and audio commentaries. Notes can be shared with other OneNote users over the Internet or a network. OneNote was initially introduced as a standalone app that was not included in any Microsoft Office 2003 edition. However, OneNote eventually became a core component of Microsoft Office; with the release of Microsoft Office 2013, OneNote was included in all Microsoft Office offerings. OneNote is also available as a web app on Office on the web, a freemium (and later freeware) Windows desktop app, a mobile app for Windows Phone, iOS, Android, and Symbian, and a Metro-style app for Windows 8 or later.
Microsoft Outlook (not to be confused with Outlook Express, Outlook.com or Outlook on the web) is a personal information manager that replaces Windows Messaging, Microsoft Mail, and Schedule+ starting in Office 97; it includes an e-mail client, calendar, task manager and address book. On the Mac OS, Microsoft offered several versions of Outlook in the late 1990s, but only for use with Microsoft Exchange Server. In Office 2001, it introduced an alternative application with a slightly different feature set called Microsoft Entourage. It reintroduced Outlook in Office 2011, replacing Entourage.
Microsoft OneDrive is a file hosting service that allows users to sync files and later access them from a web browser or mobile device.
Microsoft Teams is a platform that combines workplace chat, meetings, notes, and attachments.
Windows-only apps
Microsoft Publisher is a desktop publishing app for Windows mostly used for designing brochures, labels, calendars, greeting cards, business cards, newsletters, web sites, and postcards.
Microsoft Access is a database management system for Windows that combines the relational Access Database Engine (formerly Jet Database Engine) with a graphical user interface and software development tools. Microsoft Access stores data in its own format based on the Access Database Engine. It can also import or link directly to data stored in other applications and databases.
Microsoft Project is a project management app for Windows to keep track of events and to create network charts and Gantt charts, not bundled in any Office suite.
Microsoft Visio is a diagram and flowcharting app for Windows not bundled in any Office suite.
Mobile-only apps
Office Lens is an image scanner optimized for mobile devices. It captures the document (e.g. business card, paper, whiteboard) via the camera and then straightens the document portion of the image. The result can be exported to Word, OneNote, PowerPoint or Outlook, or saved in OneDrive, sent via Mail or placed in Photo Library.
Office Mobile is a unified Office mobile app for Android and iOS, which combines Word, Excel, and PowerPoint into a single app and introduces new capabilities such as making quick notes, signing PDFs, scanning QR codes, and transferring files.
Office Remote is an application that turns the mobile device into a remote control for desktop versions of Word, Excel and PowerPoint.
Server applications
Microsoft SharePoint is a web-based collaborative platform that integrates with Microsoft Office. Launched in 2001, SharePoint is primarily sold as a document management and storage system, but the product is highly configurable and usage varies substantially among organizations. SharePoint services include:
Excel Services is a spreadsheet editing server similar to Microsoft Excel.
InfoPath Forms Services is a form distribution server similar to Microsoft InfoPath.
Microsoft Project Server is a project management server similar to Microsoft Project.
Microsoft Search Server
Skype for Business Server is a real-time communications server for instant messaging and video-conferencing.
Microsoft Exchange Server is a mail server and calendaring server.
Web services
Microsoft Sway is a presentation web app released in October 2014. It also has a native app for iOS and Windows 10.
Delve is a service that allows Office 365 users to search and manage their emails, meetings, contacts, social networks and documents stored on OneDrive or Sites in Office 365.
Microsoft Forms is an online survey creator, available for Office 365 Education subscribers.
Microsoft To Do is a task management service.
Outlook.com is a free webmail with a user interface similar to Microsoft Outlook.
Outlook on the web is a webmail client similar to Outlook.com but more comprehensive and available only through Office 365 and Microsoft Exchange Server offerings.
Microsoft Planner is a planning application available on the Microsoft Office 365 platform.
Microsoft Stream is a corporate video sharing service for enterprise users with an Office 365 Academic or Enterprise license.
Microsoft Bookings is an appointment booking application on the Microsoft Office 365 platform.
Office on the web
Office on the web is a free, lightweight web version of Microsoft Office and primarily includes three web applications: Word, Excel and PowerPoint. The offering also includes Outlook.com, OneNote and OneDrive, which are accessible through a unified app switcher. Users can install the on-premises version of this service, called Office Online Server, in private clouds in conjunction with SharePoint, Microsoft Exchange Server and Microsoft Lync Server.
Word, Excel, and PowerPoint on the web can all natively open, edit, and save Office Open XML files (docx, xlsx, pptx) as well as OpenDocument files (odt, ods, odp). They can also open the older Office file formats (doc, xls, ppt), but will be converted to the newer Open XML formats if the user wishes to edit them online. Other formats cannot be opened in the browser apps, such as CSV in Excel or HTML in Word, nor can Office files that are encrypted with a password be opened. Files with macros can be opened in the browser apps, but the macros cannot be accessed or executed. Starting in July 2013, Word can render PDF documents or convert them to Microsoft Word documents, although the formatting of the document may deviate from the original. Since November 2013, the apps have supported real-time co-authoring and autosaving files.
Office on the web lacks a number of the advanced features present in the full desktop versions of Office, including lacking the programs Access and Publisher entirely. However, users are able to select the command "Open in Desktop App" that brings up the document in the desktop version of Office on their computer or device to utilize the advanced features there.
Supported web browsers include Microsoft Edge, Internet Explorer 11, the latest versions of Firefox or Google Chrome, as well as Safari for OS X 10.8 or later.
The Personal edition of Office on the web is available to the general public free of charge with a Microsoft account through the Office.com website, which superseded SkyDrive (now OneDrive) and Office Live Workspace. Enterprise-managed versions are available through Office 365. In February 2013, the ability to view and edit files on SkyDrive without signing in was added. The service can also be installed privately in enterprise environments as a SharePoint app, or through Office Web Apps Server. Microsoft also offers other web apps in the Office suite, such as the Outlook Web App (formerly Outlook Web Access), Lync Web App (formerly Office Communicator Web Access), Project Web App (formerly Project Web Access). Additionally, Microsoft offers a service under the name of Online Doc Viewer to view Office documents on a website via Office on the web.
There are free extensions available to use Office on the web directly in Google Chrome and Microsoft Edge.
Common features
Most versions of Microsoft Office (including Office 97 and later) use their own widget set and do not exactly match the native operating system. This is most apparent in Microsoft Office XP and 2003, where the standard menus were replaced with a colored, flat-looking, shadowed menu style. The user interface of a particular version of Microsoft Office often heavily influences a subsequent version of Microsoft Windows. For example, the toolbar, colored buttons and the gray-colored 3D look of Office 4.3 were added to Windows 95, and the ribbon, introduced in Office 2007, has been incorporated into several programs bundled with Windows 7 and later. In 2012, Office 2013 replicated the flat, box-like design of Windows 8.
Users of Microsoft Office may access external data via connection-specifications saved in Office Data Connection (.odc) files.
Both Windows and Office use service packs to update software. Office had non-cumulative service releases, which were discontinued after Office 2000 Service Release 1.
Past versions of Office often contained Easter eggs. For example, Excel 97 contained a reasonably functional flight-simulator.
File formats and metadata
Microsoft Office prior to Office 2007 used proprietary file formats based on the OLE Compound File Binary Format. This forced users who share data to adopt the same software platform. In 2008, Microsoft made the entire documentation for the binary Office formats freely available for download and granted any possible patents rights for use or implementations of those binary format for free under the Open Specification Promise. Previously, Microsoft had supplied such documentation freely but only on request.
Starting with Office 2007, the default file format has been a version of Office Open XML, though different from the one standardized and published by Ecma International and by ISO/IEC. Microsoft has granted patent rights to the formats technology under the Open Specification Promise and has made available free downloadable converters for previous versions of Microsoft Office including Office 2003, Office XP, Office 2000 and Office 2004 for Mac OS X. Third-party implementations of Office Open XML exist on the Windows platform (LibreOffice, all platforms), macOS platform (iWork '08, NeoOffice, LibreOffice) and Linux (LibreOffice and OpenOffice.org 3.0). In addition, Office 2010, Service Pack 2 for Office 2007, and Office 2016 for Mac supports the OpenDocument Format (ODF) for opening and saving documents – only the old ODF 1.0 (2006 ISO/IEC standard) is supported, not the 1.2 version (2015 ISO/IEC standard).
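As a small illustration of how this format choice surfaces in the Word object model (a sketch assuming Word 2010 or later, where SaveAs2 is available; the file paths are placeholders), a document can be written out as either Office Open XML or OpenDocument text:

' Saving the active Word document in the two XML-based formats discussed above.
Sub SaveInBothFormats()
    ActiveDocument.SaveAs2 FileName:="C:\Temp\example.docx", _
                           FileFormat:=wdFormatXMLDocument        ' Office Open XML
    ActiveDocument.SaveAs2 FileName:="C:\Temp\example.odt", _
                           FileFormat:=wdFormatOpenDocumentText   ' OpenDocument text
End Sub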
Microsoft provides the ability to remove metadata from Office documents. This was in response to highly publicized incidents where sensitive data about a document was leaked via its metadata. Metadata removal was first available in 2004, when Microsoft released a tool called Remove Hidden Data Add-in for Office 2003/XP for this purpose. It was directly integrated into Office 2007 in a feature called the Document Inspector.
Extensibility
A major feature of the Office suite is the ability for users and third-party companies to write add-ins (plug-ins) that extend the capabilities of an application by adding custom commands and specialized features. One of the new features is the Office Store. Plugins and other tools can be downloaded by users. Developers can make money by selling their applications in the Office Store. The revenue is divided between the developer and Microsoft, with the developer receiving 80% of the revenue. Developers are able to share applications with all Office users.
The app travels with the document, and it is for the developer to decide what the recipient will see when they open it. The recipient will either have the option to download the app from the Office Store for free, start a free trial or be directed to payment.
With Office's cloud abilities, IT departments can create a set of apps for their business employees in order to increase their productivity. When employees go to the Office Store, they'll see their company's apps under My Organization. The apps that employees have personally downloaded will appear under My Apps. Developers can use web technologies like HTML5, XML, CSS3, JavaScript, and APIs to build the apps.
An application for Office is a webpage that is hosted inside an Office client application. Users can use apps to amplify the functionality of a document, email message, meeting request, or appointment. Apps can run in multiple environments and clients, including rich Office desktop clients, Office Web Apps, mobile browsers, and both on-premises and in the cloud. The types of add-ins supported differ by Office version:
Office 97 onwards (standard Windows DLLs i.e. Word WLLs and Excel XLLs)
Office 2000 onwards (COM add-ins)
Office XP onwards (COM/OLE Automation add-ins)
Office 2003 onwards (Managed code add-ins – VSTO solutions)
Password protection
Microsoft Office has a security feature that allows users to encrypt Office (Word, Excel, PowerPoint, Access, Skype Business) documents with a user-provided password. The password can contain up to 255 characters and uses AES 128-bit encryption by default. Passwords can also be used to restrict modification of the entire document, worksheet or presentation. Because these restriction passwords do not encrypt the document, however, they can be removed using third-party cracking software.
Support policies
Approach
All versions of Microsoft Office products from Office 2000 to Office 2016 are eligible for ten years of support following their release, during which Microsoft releases security updates for the product version and provides paid technical support. The ten-year period is divided into two five-year phases: The mainstream phase and the extended phase. During the mainstream phase, Microsoft may provide limited complimentary technical support and release non-security updates or change the design of the product. During the extended phase, said services stop. Office 2019 only receives 5 years of mainstream and 2 years of extended support and Office 2021 only gets 5 years of mainstream support.
Timelines of support
Platforms
Microsoft supports Office for the Windows and macOS platforms, as well as mobile versions for Windows Phone, Android and iOS platforms. Beginning with Mac Office 4.2, the macOS and Windows versions of Office share the same file format, and are interoperable. Visual Basic for Applications support was dropped in Microsoft Office 2008 for Mac, then reintroduced in Office for Mac 2011.
Microsoft tried in the mid-1990s to port Office to RISC processors such as NEC/MIPS and IBM/PowerPC, but they met problems such as memory access being hampered by data structure alignment requirements. Microsoft Word 97 and Excel 97, however, did ship for the DEC Alpha platform. Difficulties in porting Office may have been a factor in discontinuing Windows NT on non-Intel platforms.
Pricing model and editions
The Microsoft Office applications and suites are sold via retail channels, and volume licensing for larger organizations (also including the "Home Use Program", allowing users at participating organizations to buy low-cost licenses for use on their personal devices as part of their employer's volume license agreement).
In 2010, Microsoft introduced a software as a service platform known as Office 365, to provide cloud-hosted versions of Office's server software, including Exchange e-mail and SharePoint, on a subscription basis (competing in particular with Google Apps). Following the release of Office 2013, Microsoft began to offer Office 365 plans for the consumer market, with access to Microsoft Office software on multiple devices with free feature updates over the life of the subscription, as well as other services such as OneDrive storage.
Microsoft has since promoted Office 365 as the primary means of purchasing Microsoft Office. Although there are still "on-premises" releases roughly every three years, Microsoft marketing emphasizes that, unlike Office 365, they do not receive new features or access to new cloud-based services as they are released, nor other benefits for consumer and business markets. Office 365 revenue overtook traditional license sales for Office in 2017.
Editions
Microsoft Office is available in several editions, which regroup a given number of applications for a specific price. Primarily, Microsoft sells Office as Microsoft 365. The editions are as follows:
Microsoft 365 Personal
Microsoft 365 Family
Microsoft 365 Business Basic
Microsoft 365 Business Standard
Microsoft 365 Business Premium
Microsoft 365 apps for business
Microsoft 365 apps for enterprise
Office 365 E1, E3, E5
Office 365 A1, A3, A5 (for education)
Office 365 G1, G3, G5 (for government)
Microsoft 365 F1, F3, Office 365 F3 (for frontline)
Microsoft sells Office for a one-time purchase as Home & Student and Home & Business; however, these editions do not receive major updates.
Education pricing
Post-secondary students may obtain the University edition of Microsoft Office 365 subscription. It is limited to one user and two devices, plus the subscription price is valid for four years instead of just one. Apart from this, the University edition is identical in features to the Home Premium version. This marks the first time Microsoft does not offer physical or permanent software at academic pricing, in contrast to the University versions of Office 2010 and Office 2011. In addition, students eligible for DreamSpark program may receive select standalone Microsoft Office apps free of charge.
Discontinued applications and features
Binder was an application that could incorporate several documents into one file and was originally designed as a container system for storing related documents in a single file. The complexity of use and learning curve led to little usage, and it was discontinued after Office XP.
Bookshelf was a reference collection introduced in 1987 as part of Microsoft's extensive work in promoting CD-ROM technology as a distribution medium for electronic publishing.
Data Analyzer was a business intelligence program for graphical visualization of data and its analysis.
Docs.com was a public document sharing service where Office users can upload and share Word, Excel, PowerPoint, Sway and PDF files for the whole world to discover and use.
Entourage was an Outlook counterpart on macOS; Microsoft discontinued it in favor of extending the Outlook brand name to the Mac.
FrontPage was a WYSIWYG HTML editor and website administration tool for Windows. It was branded as part of the Microsoft Office suite from 1997 to 2003. FrontPage was discontinued in December 2006 and replaced by Microsoft SharePoint Designer and Microsoft Expression Web.
InfoPath was a Windows application for designing and distributing rich XML-based forms. The last version was included in Office 2013.
InterConnect was a business-relationship database available only in Japan.
Internet Explorer was a graphical web browser and one of the main participants of the first browser war. It was included in Office until Office XP when it was removed.
Mail was a mail client (in old versions of Office, later replaced by Microsoft Schedule Plus and subsequently Microsoft Outlook).
Office Accounting (formerly Small Business Accounting) was an accounting software application from Microsoft targeted towards small businesses that had between 1 and 25 employees.
Office Assistant (included since Office 97 on Windows and Office 98 on Mac as a part of Microsoft Agent technology) was a system that uses animated characters to offer context-sensitive suggestions to users and access to the help system. The Assistant is often dubbed "Clippy" or "Clippit", due to its default to a paper clip character, coded as CLIPPIT.ACS. The latest versions that include the Office Assistant were Office 2003 (Windows) and Office 2004 (Mac).
Office Document Image Writer was a virtual printer that took documents from Microsoft Office or any other application and printed them, or stored them in an image file in TIFF or Microsoft Document Imaging Format. It was discontinued with Office 2010.
Office Document Imaging was an application that supported editing scanned documents. It was discontinued with Office 2010.
Office Document Scanning was a scanning and OCR application. It was discontinued with Office 2010.
Office Picture Manager was basic photo-management software (similar to Google's Picasa or Adobe's Photoshop Elements) that replaced Microsoft Photo Editor.
PhotoDraw was a graphics program that was first released as part of the Office 2000 Premium Edition. A later version for Windows XP compatibility was released, known as PhotoDraw 2000 Version 2. Microsoft discontinued the program in 2001.
Photo Editor was photo-editing or raster-graphics software in older Office versions up to Office XP. It was supplemented by Microsoft PhotoDraw in Office 2000 Premium edition.
Schedule Plus (also shown as Schedule+) was released with Office 95. It featured a planner, to-do list, and contact information. Its functions were incorporated into Microsoft Outlook.
SharePoint Designer was a WYSIWYG HTML editor and website administration tool. Microsoft attempted to turn it into a specialized HTML editor for SharePoint sites, but the effort was unsuccessful and the product was eventually discontinued.
SharePoint Workspace (formerly Groove) was a proprietary peer-to-peer document collaboration software designed for teams with members who are regularly offline or who do not share the same network security clearance.
Skype for Business was an integrated communications client for conferences and meetings in real time; it is the only Microsoft Office desktop app that is not useful without a proper network infrastructure and the only one without the "Microsoft" prefix in its name.
Streets & Trips (known in other countries as Microsoft AutoRoute) is a discontinued mapping program developed and distributed by Microsoft.
Unbind is a program that can extract the contents of a Binder file. Unbind can be installed from the Office XP CD-ROM.
Virtual PC was included with Microsoft Office Professional Edition 2004 for Mac. Microsoft discontinued support for Virtual PC on the Mac in 2006 owing to new Macs possessing the same Intel architecture as Windows PCs. It emulated a standard PC and its hardware.
Vizact was a program that "activated" documents using HTML, adding effects such as animation. It allowed users to create dynamic documents for the Web. Development ended due to unpopularity.
Discontinued server applications
Microsoft Office Forms Server let users fill InfoPath forms in any browser. Office Forms Server was a standalone server installation of InfoPath Forms Services.
Microsoft Office Groove Server centrally managed all deployments of Microsoft Office Groove in the enterprise.
Microsoft Office Project Portfolio Server allowed the creation of a project portfolio, including workflows, hosted centrally.
Microsoft Office PerformancePoint Server allowed customers to monitor, analyze, and plan their business.
Discontinued web services
Office Live
Office Live Small Business offered web hosting services and online collaboration tools for small businesses.
Office Live Workspace was an online storage and collaboration service for documents, which was superseded by Office on the web.
Office Live Meeting was a web conferencing service.
Criticism
Editor
In January 2022, entrepreneur Vivek Ramaswamy appeared on Fox News and criticized changes to Microsoft Editor that substituted gender-neutral forms of some words for equivalent gendered terms: "postal carrier" or "mail carrier" in place of "mailman," for example.
Data formats
Microsoft Office has been criticized in the past for using proprietary file formats rather than open standards, which forces users who share data into adopting the same software platform. However, on February 15, 2008, Microsoft made the entire documentation for the binary Office formats freely available under the Open Specification Promise. Also, Office Open XML, the document format for the latest versions of Office for Windows and Mac, has been standardized under both Ecma International and ISO. Ecma International has published the Office Open XML specification free of copyrights, and Microsoft has granted patent rights to the format's technology under the Open Specification Promise and has made free downloadable converters available for previous versions of Microsoft Office, including Office 2003, Office XP, Office 2000 and Office 2004 for the Mac. Third-party implementations of Office Open XML exist on the Mac platform (iWork '08) and Linux (OpenOffice.org 2.3 – Novell Edition only).
Unicode and bi-directional texts
Another point of criticism Microsoft Office faced was the lack of support in its Mac versions for Unicode and bi-directional text languages, notably Arabic and Hebrew. This issue, which had existed since the first release in 1989, was addressed in the 2016 version.
Privacy
On November 13, 2018, a report initiated by the Government of the Netherlands concluded that Microsoft Office 2016 and Office 365 do not comply with GDPR, the European law which regulates data protection and privacy for all citizens in and outside the EU and EFTA region. The investigation was initiated by the observation that Microsoft does not reveal or share publicly any data collected about users of its software. In addition, the company does not provide users of its (Office) software an option to turn off diagnostic and telemetry data sent back to the company. Researchers found that most of the data that the Microsoft software collects and "sends home" is diagnostics. Researchers also observed that Microsoft "seemingly tried to make the system GDPR compliant by storing Office documents on servers based in the EU". However, they discovered the software packages collected additional data that contained private user information, some of which was stored on servers located in the US. The Netherlands Ministry of Justice hired Privacy Company to probe and evaluate the use of Microsoft Office products in the public sector. "Microsoft systematically collects data on a large scale about the individual use of Word, Excel, PowerPoint, and Outlook. Covertly, without informing people", researchers of the Privacy Company stated in their blog post. "Microsoft does not offer any choice with regard to the amount of data, or possibility to switch off the collection, or ability to see what data are collected, because the data stream is encoded."
The researchers commented that there is no need for Microsoft to store information such as IPs and email addresses, which are collected automatically by the software. "Microsoft should not store these transient, functional data, unless the retention is strictly necessary, for example, for security purposes", the researchers conclude in the final report by the Netherlands Ministry of Justice.
As a result of this in-depth study and its conclusions, the Netherlands regulatory body concluded that Microsoft has violated GDPR "on many counts" including "lack of transparency and purpose limitation, and the lack of a legal ground for the processing." Microsoft has provided the Dutch authorities with an "improvement plan" that should satisfy Dutch regulators that it "would end all violations". The Dutch regulatory body is monitoring the situation and states that "If progress is deemed insufficient or if the improvements offered are unsatisfactory, SLM Microsoft Rijk will reconsider its position and may ask the Data Protection Authority to carry out a prior consultation and to impose enforcement measures." When asked for a response by an IT professional publication, a Microsoft spokesperson stated: "We are committed to our customers’ privacy, putting them in control of their data and ensuring that Office ProPlus and other Microsoft products and services comply with GDPR and other applicable laws. We appreciate the opportunity to discuss our diagnostic data handling practices in Office ProPlus with the Dutch Ministry of Justice and look forward to a successful resolution of any concerns." The user privacy data issue affects ProPlus subscriptions of Microsoft Office 2016 and Microsoft Office 365, including the online version of Microsoft Office 365.
History of releases
Version history
Windows versions
Microsoft Office for Windows
Microsoft Office for Windows started in October 1990 as a bundle of three applications designed for Microsoft Windows 3.0: Microsoft Word for Windows 1.1, Microsoft Excel for Windows 2.0, and Microsoft PowerPoint for Windows 2.0.
Microsoft Office for Windows 1.5 updated the suite with Microsoft Excel 3.0.
Version 1.6 added Microsoft Mail for PC Networks 2.1 to the bundle.
Microsoft Office 3.0
Microsoft Office 3.0, also called Microsoft Office 92, was released on August 30, 1992, and contained Word 2.0, Excel 4.0, PowerPoint 3.0 and Mail 3.0. It was the first version of Office also released on CD-ROM. In 1993, Microsoft Office Professional was released, which added Microsoft Access 1.1.
Microsoft Office 4.x
Microsoft Office 4.0 was released containing Word 6.0, Excel 4.0a, PowerPoint 3.0 and Mail in 1993. Word's version number jumped from 2.0 to 6.0 so that it would have the same version number as the MS-DOS and Macintosh versions (Excel and PowerPoint were already numbered the same as the Macintosh versions).
Microsoft Office 4.2 for Windows NT was released in 1994 for i386, Alpha, MIPS and PowerPC architectures, containing Word 6.0 and Excel 5.0 (both 32-bit), PowerPoint 4.0 (16-bit), and Microsoft Office Manager 4.2 (the precursor to the Office Shortcut Bar).
Microsoft Office 95
Microsoft Office 95 was released on August 24, 1995. Software version numbers were altered again to create parity across the suite: every program was called version 7.0, meaning all but Word skipped version numbers. Office 95 included new components to the suite such as Schedule+ and Binder. Office for Windows 95 was designed as a fully 32-bit version to match Windows 95, although some applications not bundled as part of the suite at that time, such as Publisher for Windows 95 and Project 95, had some 16-bit components even though their main program executables were 32-bit.
Office 95 was available in two versions, Office 95 Standard and Office 95 Professional. The standard version consisted of Word 7.0, Excel 7.0, PowerPoint 7.0, and Schedule+ 7.0. The professional edition contained all of the items in the standard version plus Access 7.0. If the professional version was purchased in CD-ROM form, it also included Bookshelf.
The logo used in Office 95 returns in Office 97, 2000 and XP. Microsoft Office 98 Macintosh Edition also uses a similar logo.
Microsoft Office 97
Microsoft Office 97 (Office 8.0) included hundreds of new features and improvements, such as introducing command bars, a paradigm in which menus and toolbars were made more similar in capability and visual design. Office 97 also featured Natural Language Systems and grammar checking. Office 97 featured new components to the suite including FrontPage 97, Expedia Streets 98 (in Small Business Edition), and Internet Explorer 3.0 & 4.0.
Office 97 was the first version of Office to include the Office Assistant. In Brazil, it was also the first version to introduce the Registration Wizard, a precursor to Microsoft Product Activation. With this release, the accompanying apps Project 98 and Publisher 98 also transitioned to fully 32-bit versions. Exchange Server, a mail and calendaring server developed by Microsoft, became the server for Outlook after Exchange Client was discontinued.
Microsoft Office 2000
Microsoft Office 2000 (Office 9.0) introduced adaptive menus, where little-used options were hidden from the user. It also introduced a new security feature, built around digital signatures, to diminish the threat of macro viruses. The Microsoft Script Editor, an optional tool that can edit script code, was also introduced in Office 2000. Office 2000 automatically trusts macros (written in VBA 6) that were digitally signed from authors who have been previously designated as trusted. Office 2000 also introduces PhotoDraw, a raster and vector imaging program, as well as Web Components, Visio, and Vizact.
The Registration Wizard, a precursor to Microsoft Product Activation, remained in Brazil and was also extended to Australia and New Zealand, though not for volume-licensed editions. Academic software in the United States and Canada also featured the Registration Wizard.
Microsoft Office XP
Microsoft Office XP (Office 10.0 or Office 2002) was released in conjunction with Windows XP, and was a major upgrade with numerous enhancements and changes over Office 2000. Office XP introduced the Safe Mode feature, which allows applications such as Outlook to boot when it might otherwise fail by bypassing a corrupted registry or a faulty add-in. Smart tag is a technology introduced with Office XP in Word and Excel and discontinued in Office 2010.
Office XP also introduces new components including Document Imaging, Document Scanning, Clip Organizer, MapPoint, and Data Analyzer. Binder was replaced by Unbind, a program that can extract the contents of a Binder file. Unbind can be installed from the Office XP CD-ROM.
Office XP includes integrated voice command and text dictation capabilities, as well as handwriting recognition. It was the first version to require Microsoft Product Activation worldwide and in all editions as an anti-piracy measure, which attracted widespread controversy. Product Activation remained absent from Office for Mac releases until it was introduced in Office 2011 for Mac.
Microsoft Office 2003
Microsoft Office 2003 (Office 11.0) was released in 2003. It featured a new logo. Two new applications made their debut in Office 2003: Microsoft InfoPath and OneNote. It is the first version to use new, more colorful icons. Outlook 2003 provides improved functionality in many areas, including Kerberos authentication, RPC over HTTP, Cached Exchange Mode, and an improved junk mail filter.
Office 2003 introduces three new programs to the Office product lineup: InfoPath, a program for designing, filling, and submitting electronic structured data forms; OneNote, a note-taking program for creating and organizing diagrams, graphics, handwritten notes, recorded audio, and text; and the Picture Manager graphics software which can open, manage, and share digital images.
SharePoint, a web collaboration platform codenamed Office Server, has integration and compatibility with Office 2003 and later versions.
Microsoft Office 2007
Microsoft Office 2007 (Office 12.0) was released in 2007. Office 2007's new features include a new graphical user interface called the Fluent User Interface, replacing the menus and toolbars that have been the cornerstone of Office since its inception with a tabbed toolbar, known as the Ribbon; new XML-based file formats called Office Open XML; and the inclusion of Groove, a collaborative software application.
While Microsoft removed Data Analyzer, FrontPage, Vizact, and Schedule+ from Office 2007, it also added Communicator, Groove, SharePoint Designer, and the Office Customization Tool (OCT) to the suite.
Microsoft Office 2010
Microsoft Office 2010 (Office 14.0; Microsoft skipped 13.0 owing to superstition surrounding the number 13) was finalized on April 15, 2010, and made available to consumers on June 15, 2010. The main features of Office 2010 include the Backstage file menu, new collaboration tools, a customizable ribbon, Protected View and a navigation pane. Office Communicator, an instant messaging and videotelephony application, was renamed Lync 2010.
This is the first version to ship in 32-bit and 64-bit variants. Microsoft Office 2010 featured a new logo, which resembled the 2007 logo except in gold and with a slightly modified shape. Microsoft released Service Pack 1 for Office 2010 on June 28, 2011 and Service Pack 2 on July 16, 2013. Office Online was first released online along with SkyDrive, an online storage service.
Microsoft Office 2013
A technical preview of Microsoft Office 2013 (Build 15.0.3612.1010) was released on January 30, 2012, and a Customer Preview version was made available to consumers on July 16, 2012. It sports a revamped application interface; the interface is based on Metro, the interface of Windows Phone and Windows 8. Microsoft Outlook has received the most pronounced changes so far; for example, the Metro interface provides a new visualization for scheduled tasks. PowerPoint includes more templates and transition effects, and OneNote includes a new splash screen.
On May 16, 2011, new images of Office 15 were revealed, showing Excel with a tool for filtering data in a timeline, the ability to convert Roman numerals to Arabic numerals, and the integration of advanced trigonometric functions. In Word, the capability of inserting video and audio online as well as the broadcasting of documents on the Web were implemented. Microsoft has promised support for Office Open XML Strict starting with version 15, a format Microsoft has submitted to the ISO for interoperability with other office suites, and to aid adoption in the public sector. This version can read and write ODF 1.2 (Windows only).
On October 24, 2012, Office 2013 Professional Plus was released to manufacturing and was made available to TechNet and MSDN subscribers for download. On November 15, 2012, the 60-day trial version was released for public download. Office 2013 was released to general availability on January 29, 2013. Service Pack 1 for Office 2013 was released on February 25, 2014. Some applications were completely removed from the entire suite including SharePoint Workspace, Clip Organizer, and Office Picture Manager.
Microsoft Office 2016
On January 22, 2015, the Microsoft Office blog announced that the next version of the suite for Windows desktop, Office 2016, was in development. On May 4, 2015, a public preview of Microsoft Office 2016 was released. Office 2016 was released for Mac OS X on July 9, 2015 and for Windows on September 22, 2015.
Users who had the Professional Plus 2016 subscription have the new Skype for Business app. Microsoft Teams, a team collaboration program meant to rival Slack, was released as a separate product for business and enterprise users.
Microsoft Office 2019
On September 26, 2017, Microsoft announced that the next version of the suite for Windows desktop, Office 2019, was in development. On April 27, 2018, Microsoft released Office 2019 Commercial Preview for Windows 10. It was released to general availability for Windows 10 and for macOS on September 24, 2018.
Microsoft Office 2021
On February 18, 2021, Microsoft announced that the next version of the suite for Windows desktop, Office 2021, was in development. This version is supported for five years and was released on October 5, 2021.
Mac versions
Prior to packaging its various office-type Mac OS software applications into Office, Microsoft released Mac versions of Word 1.0 in 1984, the first year of the Macintosh computer; Excel 1.0 in 1985; and PowerPoint 1.0 in 1987. Microsoft does not include its Access database application in Office for Mac.
Microsoft has noted that some features are added to Office for Mac before they appear in Windows versions, such as Office for Mac 2001's Office Project Gallery and PowerPoint Movie feature, which allows users to save presentations as QuickTime movies. However, Microsoft Office for Mac has been long criticized for its lack of support of Unicode and for its lack of support for right-to-left languages, notably Arabic, Hebrew and Persian.
Early Office for Mac releases (1989–1994)
Microsoft Office for Mac was introduced for Mac OS in 1989, before Office was released for Windows. It included Word 4.0, Excel 2.2, PowerPoint 2.01, and Mail 1.37. It was originally a limited-time promotion but later became a regular product. With the release of Office on CD-ROM later that year, Microsoft became the first major Mac publisher to put its applications on CD-ROM.
Microsoft Office 1.5 for Mac was released in 1991 and included the updated Excel 3.0, the first application to support Apple's System 7 operating system.
Microsoft Office 3.0 for Mac was released in 1992 and included Word 5.0, Excel 4.0, PowerPoint 3.0 and Mail Client. Excel 4.0 was the first application to support Apple's new AppleScript.
Microsoft Office 4.2 for Mac was released in 1994. (Version 4.0 was skipped to synchronize version numbers with Office for Windows.) Version 4.2 included Word 6.0, Excel 5.0, PowerPoint 4.0 and Mail 3.2. It was the first Office suite for Power Macintosh. Its user interface was identical to Office 4.2 for Windows, leading many customers to comment that it wasn't Mac-like enough. The final release for the Mac 68K was Office 4.2.1, which updated Word to version 6.0.1, somewhat improving performance.
Microsoft Office 98 Macintosh Edition
Microsoft Office 98 Macintosh Edition was unveiled at MacWorld Expo/San Francisco in 1998. It introduced the Internet Explorer 4.0 web browser and Outlook Express, an Internet e-mail client and usenet newsgroup reader. Office 98 was re-engineered by Microsoft's Macintosh Business Unit to satisfy customers' desire for software they felt was more Mac-like. It included drag–and-drop installation, self-repairing applications and Quick Thesaurus, before such features were available in Office for Windows. It also was the first version to support QuickTime movies.
Microsoft Office 2001 and v. X
Microsoft Office 2001 was launched in 2000 as the last Office suite for the classic Mac OS. It required a PowerPC processor. This version introduced Entourage, an e-mail client that included information management tools such as a calendar, an address book, task lists and notes.
Microsoft Office v. X was released in 2001 and was the first version of Microsoft Office for Mac OS X. Support for Office v. X ended on January 9, 2007, after the release of the final update, 10.1.9. Office v. X includes Word X, Excel X, PowerPoint X, Entourage X, MSN Messenger for Mac and Windows Media Player 9 for Mac; it was the last version of Office for Mac to include Internet Explorer for Mac.
Office 2004
Microsoft Office 2004 for Mac was released on May 11, 2004. It includes Microsoft Word, Excel, PowerPoint, Entourage and Virtual PC. It is the final version of Office to be built exclusively for PowerPC and to officially support G3 processors, as its sequel lists a G4, G5, or Intel processor as a requirement. It was notable for supporting Visual Basic for Applications (VBA), which is unavailable in Office 2008. This led Microsoft to extend support for Office 2004 from October 13, 2009, to January 10, 2012. VBA functionality was reintroduced in Office 2011, which is only compatible with Intel processors.
Office 2008
Microsoft Office 2008 for Mac was released on January 15, 2008. It was the only Office for Mac suite to be compiled as a universal binary, being the first to feature native Intel support and the last to feature PowerPC support for G4 and G5 processors, although the suite is unofficially compatible with G3 processors. New features include native Office Open XML file format support, which debuted in Office 2007 for Windows, and stronger Microsoft Office password protection employing AES-128 and SHA-1. Benchmarks suggested that compared to its predecessor, Office 2008 ran at similar speeds on Intel machines and slower speeds on PowerPC machines. Office 2008 also lacked Visual Basic for Applications (VBA) support, leaving it with only 15 months of additional mainstream support compared to its predecessor. Nevertheless, five months after it was released, Microsoft said that Office 2008 was "selling faster than any previous version of Office for Mac in the past 19 years" and affirmed "its commitment to future products for the Mac."
Office 2011
Microsoft Office for Mac 2011 was released on October 26, 2010. It is the first version of Office for Mac to be compiled exclusively for Intel processors, dropping support for the PowerPC architecture. It features an OS X version of Outlook to replace the Entourage email client. This version of Outlook is intended to make the OS X version of Office work better with Microsoft's Exchange server and with those using Office for Windows. Office 2011 includes a Mac-based Ribbon similar to Office for Windows.
OneNote and Outlook release (2014)
Microsoft OneNote for Mac was released on March 17, 2014. It marks the company's first release of the note-taking software on the Mac. It is available as a free download to all users of the Mac App Store in OS X Mavericks.
Microsoft Outlook 2016 for Mac debuted on October 31, 2014. It requires a paid Office 365 subscription, meaning that traditional Office 2011 retail or volume licenses cannot activate this version of Outlook. On that day, Microsoft confirmed that it would release the next version of Office for Mac in late 2015.
Despite dropping support for older versions of OS X and only keeping support for 64-bit-only versions of OS X, these versions of OneNote and Outlook are 32-bit applications like their predecessors.
Office 2016
The first Preview version of Microsoft Office 2016 for Mac was released on March 5, 2015. On July 9, 2015, Microsoft released the final version of Microsoft Office 2016 for Mac which includes Word, Excel, PowerPoint, Outlook and OneNote. It was immediately made available for Office 365 subscribers with either a Home, Personal, Business, Business Premium, E3 or ProPlus subscription. A non–Office 365 edition of Office 2016 was made available as a one-time purchase option on September 22, 2015.
Office 2019
Mobile versions
Office Mobile for iPhone was released on June 14, 2013, in the United States. Support for 135 markets and 27 languages was rolled out over a few days. It requires iOS 8 or later. Although the app also works on iPad devices, excluding the first generation, it is designed for a small screen. Office Mobile was released for Android phones on July 31, 2013, in the United States. Support for 117 markets and 33 languages was added gradually over several weeks. It is supported on Android 4.0 and later.
Office Mobile is or was also available, though no longer supported, on Windows Mobile, Windows Phone and Symbian. There was also Office RT, a touch-optimized version of the standard desktop Office suite, pre-installed on Windows RT.
Early Office Mobile releases
Office Mobile, initially shipped as "Pocket Office", was released by Microsoft with the Windows CE 1.0 operating system in 1996. This release was specifically for the Handheld PC hardware platform, as Windows Mobile Smartphone and Pocket PC hardware specifications had not yet been released. It consisted of Pocket Word and Pocket Excel; PowerPoint, Access, and Outlook were added later. With steady updates throughout subsequent releases of Windows Mobile, Office Mobile was rebranded under its current name after the release of the Windows Mobile 5.0 operating system. This release of Office Mobile also included PowerPoint Mobile for the first time. Accompanying the release of Microsoft OneNote 2007, a new optional addition to the Office Mobile line of programs was released as OneNote Mobile. With the release of Windows Mobile 6 Standard, Office Mobile became available for the Smartphone hardware platform, but unlike Office Mobile for the Professional and Classic versions of Windows Mobile, it could not create new documents. A popular workaround was to create a new blank document in a desktop version of Office, synchronize it to the device, and then edit and save it on the Windows Mobile device.
In June 2007, Microsoft announced a new version of the office suite, Office Mobile 2007. It became available as "Office Mobile 6.1" on September 26, 2007, as a free upgrade download to current Windows Mobile 5.0 and 6 users. However, "Office Mobile 6.1 Upgrade" is not compatible with Windows Mobile 5.0 powered devices running builds earlier than 14847. It is a pre-installed feature in subsequent releases of Windows Mobile 6 devices. Office Mobile 6.1 is compatible with the Office Open XML specification like its desktop counterpart.
On August 12, 2009, it was announced that Office Mobile would also be released for the Symbian platform as a joint agreement between Microsoft and Nokia. It was the first time Microsoft would develop Office mobile applications for another smartphone platform. The first application to appear on Nokia Eseries smartphones was Microsoft Office Communicator. In February 2012, Microsoft released OneNote, Lync 2010, Document Connection and PowerPoint Broadcast for Symbian. In April, Word Mobile, PowerPoint Mobile and Excel Mobile joined the Office Suite.
On October 21, 2010, Microsoft debuted Office Mobile 2010 with the release of Windows Phone 7. In Windows Phone, users can access and edit documents directly from their SkyDrive or Office 365 accounts in a dedicated Office hub. The Office Hub, which is preinstalled into the operating system, contains Word, PowerPoint and Excel. The operating system also includes OneNote, although not as a part of the Office Hub. Lync is not included, but can be downloaded as a standalone app from the Windows Phone Store free of charge.
In October 2012, Microsoft released a new version of Microsoft Office Mobile for Windows Phone 8 and Windows Phone 7.8.
Office for Android, iOS and Windows 10 Mobile
Office Mobile was released for iPhone on June 14, 2013, and for Android phones on July 31, 2013.
In March 2014, Microsoft released Office Lens, a scanner app that enhances photos. Photos are then attached to an Office document. Office Lens is an app in the Windows Phone store, as well as built into the camera functionality in the OneNote apps for iOS and Windows 8.
On March 27, 2014, Microsoft launched Office for iPad, the first dedicated version of Office for tablet computers. In addition, Microsoft made the Android and iOS versions of Office Mobile free for 'home use' on phones, although the company still requires an Office 365 subscription for using Office Mobile for business use. On November 6, 2014, Office was subsequently made free for personal use on the iPad in addition to phones. As part of this announcement, Microsoft also split up its single "Office suite" app on iPhones into separate, standalone apps for Word, Excel and PowerPoint, released a revamped version of Office Mobile for iPhone, added direct integration with Dropbox, and previewed future versions of Office for other platforms.
Office for Android tablets was released on January 29, 2015, following a successful two-month preview period. These apps allow users to edit and create documents for free on devices with screen sizes of 10.1 inches or less, though as with the iPad versions, an Office 365 subscription is required to unlock premium features and for commercial use of the apps. Tablets with screen sizes larger than 10.1 inches are also supported, but, as was originally the case with the iPad version, are restricted to viewing documents only unless a valid Office 365 subscription is used to enable editing and document creation.
On January 21, 2015, during the "Windows 10: The Next Chapter" press event, Microsoft unveiled Office for Windows 10, Windows Runtime ports of the Android and iOS versions of the Office Mobile suite. Optimized for smartphones and tablets, they are universal apps that can run on both Windows and Windows for phones, and share similar underlying code. A simplified version of Outlook was also added to the suite. They will be bundled with Windows 10 mobile devices, and available from the Windows Store for the PC version of Windows 10. Although the preview versions were free for most editing, the release versions will require an Office 365 subscription on larger tablets (screen size larger than 10.1 inches) and desktops for editing, as with large Android tablets. Smaller tablets and phones will have most editing features for free.
On June 24, 2015, Microsoft released Word, Excel and PowerPoint as standalone apps on Google Play for Android phones, following a one-month preview. These apps have also been bundled with Android devices from major OEMs, as a result of Microsoft tying distribution of them and Skype to patent-licensing agreements related to the Android platform. The Android version is also supported on certain Chrome OS machines.
On February 19, 2020, Microsoft announced a new unified Office mobile app for Android and iOS. This app combines Word, Excel, and PowerPoint into a single app and introduces new capabilities such as making quick notes, signing PDFs, scanning QR codes, and transferring files.
Online versions
Office Web Apps was first revealed in October 2008 at PDC 2008 in Los Angeles. Chris Capossela, senior vice president of Microsoft business division, introduced Office Web Apps as lightweight versions of Word, Excel, PowerPoint and OneNote that allow people to create, edit and collaborate on Office documents through a web browser. According to Capossela, Office Web Apps was to become available as a part of Office Live Workspace. Office Web Apps was announced to be powered by AJAX as well as Silverlight; however, the latter is optional and its availability will only "enhance the user experience, resulting in sharper images and improved rendering." Microsoft's Business Division President Stephen Elop stated during PDC 2008 that "a technology preview of Office Web Apps would become available later in 2008". However, the Technical Preview of Office Web Apps was not released until 2009.
On July 13, 2009, Microsoft announced at its Worldwide Partners Conference 2009 in New Orleans that Microsoft Office 2010 reached its "Technical Preview" development milestone and features of Office Web Apps were demonstrated to the public for the first time. Additionally, Microsoft announced that Office Web Apps would be made available to consumers online and free of charge, while Microsoft Software Assurance customers will have the option of running them on premises. Office 2010 beta testers were not given access to Office Web Apps at this date, and it was announced that it would be available for testers during August 2009. However, in August 2009, a Microsoft spokesperson stated that there had been a delay in the release of Office Web Apps Technical Preview and it would not be available by the end of August.
Microsoft officially released the Technical Preview of Office Web Apps on September 17, 2009. Office Web Apps was made available to selected testers via its OneDrive (at the time SkyDrive) service. The final version of Office Web Apps was made available to the public via Windows Live Office on June 7, 2010.
On October 22, 2012, Microsoft announced the release of new features including co-authoring, performance improvements and touch support.
On November 6, 2013, Microsoft announced further new features including real-time co-authoring and an Auto-Save feature in Word (replacing the save button).
In February 2014, Office Web Apps were re-branded Office Online and incorporated into other Microsoft web services, including Calendar, OneDrive, Outlook.com, and People. Microsoft had previously attempted to unify its online services suite (including Microsoft Passport, Hotmail, MSN Messenger, and later SkyDrive) under a brand known as Windows Live, first launched in 2005. However, with the impending launch of Windows 8 and its increased use of cloud services, Microsoft dropped the Windows Live brand to emphasize that these services would now be built directly into Windows and not merely be a "bolted on" add-on. Critics had criticized the Windows Live brand for having no clear vision, as it was being applied to an increasingly broad array of unrelated services. At the same time, Windows Live Hotmail was re-launched as Outlook.com (sharing its name with the Microsoft Outlook personal information manager).
In July 2019, Microsoft announced that they were retiring the "Online" branding for Office Online. The product is now Office, and may be referred to as "Office for the web" or "Office in a browser".
See also
Microsoft Azure
Microsoft Dynamics
Microsoft Power Platform
List of Microsoft software
References
External links
1989 software
Bundled products or services
Classic Mac OS software
Office suites for macOS
Office suites for Windows
Office suites
Pocket PC software
Windows Mobile Standard software
Windows Phone software |
20407 | https://en.wikipedia.org/wiki/Multicast | Multicast | In computer networking, multicast is group communication where data transmission is addressed to a group of destination computers simultaneously. Multicast can be one-to-many or many-to-many distribution. Multicast should not be confused with physical layer point-to-multipoint communication.
Group communication may either be application layer multicast or network-assisted multicast, where the latter makes it possible for the source to efficiently send to the group in a single transmission. Copies are automatically created in other network elements, such as routers, switches and cellular network base stations, but only to network segments that currently contain members of the group. Network assisted multicast may be implemented at the data link layer using one-to-many addressing and switching such as Ethernet multicast addressing, Asynchronous Transfer Mode (ATM), point-to-multipoint virtual circuits (P2MP) or InfiniBand multicast. Network-assisted multicast may also be implemented at the Internet layer using IP multicast. In IP multicast the implementation of the multicast concept occurs at the IP routing level, where routers create optimal distribution paths for datagrams sent to a multicast destination address.
Multicast is often employed in Internet Protocol (IP) applications of streaming media, such as IPTV and multipoint videoconferencing.
Ethernet
Ethernet frames with a value of 1 in the least-significant bit of the first octet of the destination address are treated as multicast frames and are flooded to all points on the network. This mechanism constitutes multicast at the data link layer. This mechanism is used by IP multicast to achieve one-to-many transmission for IP on Ethernet networks. Modern Ethernet controllers filter received packets to reduce CPU load, by looking up the hash of a multicast destination address in a table, initialized by software, which controls whether a multicast packet is dropped or fully received.
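The multicast test on the destination address is a single bit check, as this minimal Python sketch shows (the example MAC addresses are arbitrary illustrations):

# A destination MAC address is multicast when the least-significant bit of its
# first octet (the I/G bit) is 1.
def is_multicast_mac(mac: str) -> bool:
    first_octet = int(mac.split(":")[0], 16)
    return bool(first_octet & 0x01)

print(is_multicast_mac("01:00:5e:00:00:fb"))   # True  - IPv4 multicast MAC range
print(is_multicast_mac("ff:ff:ff:ff:ff:ff"))   # True  - broadcast also has the bit set
print(is_multicast_mac("3c:22:fb:12:34:56"))   # False - ordinary unicast address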
Ethernet multicast is available on all Ethernet networks. Multicasts span the broadcast domain of the network. Multiple Registration Protocol can be used to control Ethernet multicast delivery.
IP
IP multicast is a technique for one-to-many communication over an IP network. The destination nodes send Internet Group Management Protocol join and leave messages, for example in the case of IPTV when the user changes from one TV channel to another. IP multicast scales to a larger receiver population by not requiring prior knowledge of who or how many receivers there are. Multicast uses network infrastructure efficiently by requiring the source to send a packet only once, even if it needs to be delivered to a large number of receivers. The nodes in the network take care of replicating the packet to reach multiple receivers only when necessary.
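A minimal Python sketch of a receiver illustrates the join step: when the IP_ADD_MEMBERSHIP socket option is set, the host's IP stack issues the corresponding IGMP membership report for the group. The group address 239.1.1.1 and port 5007 are arbitrary examples from the administratively scoped range:

import socket
import struct

GROUP, PORT = "239.1.1.1", 5007

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", PORT))

# ip_mreq structure: 4-byte group address + 4-byte local interface (any)
mreq = struct.pack("4s4s", socket.inet_aton(GROUP), socket.inet_aton("0.0.0.0"))
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

while True:
    data, sender = sock.recvfrom(1500)
    print(f"{len(data)} bytes from {sender}")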
The most common transport layer protocol to use multicast addressing is User Datagram Protocol (UDP). By its nature, UDP is not reliable—messages may be lost or delivered out of order. By adding loss detection and retransmission mechanisms, reliable multicast has been implemented on top of UDP or IP by various middleware products, e.g. those that implement the Real-Time Publish-Subscribe (RTPS) Protocol of the Object Management Group (OMG) Data Distribution Service (DDS) standard, as well as by special transport protocols such as Pragmatic General Multicast (PGM).
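On the sending side, a single UDP sendto() to the group address is enough; the network replicates the datagram toward each member. A minimal Python sketch, assuming the same example group and port as above and offering no reliability beyond what UDP itself provides:

import socket

GROUP, PORT = "239.1.1.1", 5007

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# TTL 1 keeps the datagram on the local subnet; raise it only where multicast
# routing is available and intended.
sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)
sock.sendto(b"hello, group", (GROUP, PORT))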
IP multicast is always available within the local subnet. Achieving IP multicast service over a wider area requires multicast routing. Many networks, including the Internet, do not support multicast routing. Multicast routing functionality is available in enterprise-grade network equipment but is typically not available until configured by a network administrator. The Internet Group Management Protocol is used to control IP multicast delivery.
Application layer
Application layer multicast overlay services are not based on IP multicast or data link layer multicast. Instead they use multiple unicast transmissions to simulate a multicast. These services are designed for application-level group communication. Internet Relay Chat (IRC) implements a single spanning tree across its overlay network for all conference groups. The lesser-known PSYC technology uses custom multicast strategies per conference. Some peer-to-peer technologies employ the multicast concept known as peercasting when distributing content to multiple recipients.
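In contrast to network-assisted multicast, an application-layer overlay simply fans a message out as separate unicast transmissions to a membership list it maintains itself, as in this minimal Python sketch (the peer addresses are hypothetical, taken from the TEST-NET documentation range):

import socket

# Group membership tracked by the application, not by the network.
group_members = [("192.0.2.10", 6000), ("192.0.2.11", 6000), ("192.0.2.12", 6000)]

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
message = b"conference update"
for peer in group_members:
    sock.sendto(message, peer)    # N members -> N copies leave the sender's link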
Explicit multi-unicast (Xcast) is another multicast strategy that includes addresses of all intended destinations within each packet. As such, given maximum transmission unit limitations, Xcast cannot be used for multicast groups with many destinations. The Xcast model generally assumes that stations participating in the communication are known ahead of time, so that distribution trees can be generated and resources allocated by network elements in advance of actual data traffic.
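The scaling limit follows from simple arithmetic: every destination address is carried inside each packet and competes with payload for the MTU. The Python sketch below uses assumed, illustrative numbers (IPv4 over a 1500-byte Ethernet MTU, ignoring the details of the actual Xcast header encoding) to show how payload room shrinks as the group grows:

# Illustrative back-of-the-envelope calculation, not the real Xcast header layout.
MTU = 1500
IP_HEADER = 20          # basic IPv4 header, no options
UDP_HEADER = 8
ADDR_SIZE = 4           # one IPv4 destination address

def xcast_payload_room(n_destinations: int) -> int:
    """Bytes left for payload after headers and the explicit destination list."""
    return MTU - IP_HEADER - UDP_HEADER - n_destinations * ADDR_SIZE

print(xcast_payload_room(10))    # small group: most of the MTU is still payload
print(xcast_payload_room(300))   # large group: the address list alone takes 1200 bytes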
Wireless networks
Wireless communications (with the exception of point-to-point radio links using directional antennas) are inherently broadcast media. However, the communication service provided may be unicast, multicast or broadcast, depending on whether the data is addressed to one, to a group, or to all receivers in the covered network, respectively.
Television
In digital television, the concept of multicast service sometimes is used to refer to content protection by broadcast encryption, i.e. encrypted pay television content over a simplex broadcast channel only addressed to paying viewers. In this case, data is broadcast to all receivers but only addressed to a specific group.
The concept of interactive multicast, for example using IP multicast, may be used over TV broadcast networks to improve efficiency, offer more TV programs, or reduce the required spectrum. Interactive multicast implies that TV programs are sent only over transmitters where there are viewers and that only the most popular programs are transmitted. It relies on an additional interaction channel (a back-channel or return channel), where user equipment may send join and leave messages when the user changes TV channel. Interactive multicast has been suggested as an efficient transmission scheme in DVB-H and DVB-T2 terrestrial digital television systems. A similar concept is switched broadcast over cable-TV networks, where only the currently most popular content is delivered in the cable-TV network. Scalable video multicast is an application of interactive multicast, where a subset of the viewers receive additional data for high-resolution video.
TV gateways convert satellite (DVB-S, DVB-S2), cable (DVB-C, DVB-C2) and terrestrial television (DVB-T, DVB-T2) signals to IP for distribution using unicast and multicast in home, hospitality and enterprise applications.
Another similar concept is Cell-TV, which refers to TV distribution over 3G cellular networks using the network-assisted multicasting offered by the Multimedia Broadcast Multicast Service (MBMS), or over 4G/LTE cellular networks with the eMBMS (enhanced MBMS) service.
See also
Anycast
Any-source multicast
Content delivery network
Flooding algorithm
Mbone, experimental multicast backbone network
Multicast lightpaths
Narada multicast protocol
Non-broadcast multiple-access network
Push technology
Source-specific multicast
Broadcast, unknown-unicast and multicast traffic
References
Internet architecture
Internet broadcasting
Television terminology |
20640 | https://en.wikipedia.org/wiki/MacOS | MacOS | macOS (previously Mac OS X and later OS X) is a proprietary graphical operating system developed and marketed by Apple Inc. since 2001. It is the primary operating system for Apple's Mac computers. Within the market of desktop and laptop computers it is the second most widely used desktop OS, after Microsoft Windows and ahead of Chrome OS.
macOS succeeded the classic Mac OS, a Macintosh operating system with nine releases from 1984 to 1999. During this time, Apple cofounder Steve Jobs had left Apple and started another company, NeXT, developing the NeXTSTEP platform that would later be acquired by Apple to form the basis of macOS.
The first desktop version, Mac OS X 10.0, was released in March 2001, with its first update, 10.1, arriving later that year. All releases from Mac OS X 10.5 Leopard and after are UNIX 03 certified, with an exception for OS X 10.7 Lion. Apple's mobile operating system, iOS, has been considered a variant of macOS.
A prominent part of macOS's original brand identity was the use of Roman numeral X, pronounced "ten" as in Mac OS X and also the iPhone X, as well as code naming each release after species of big cats, or places within California. Apple shortened the name to "OS X" in 2012 and then changed it to "macOS" in 2016 to align with the branding of Apple's other operating systems, iOS, watchOS, and tvOS. After sixteen distinct versions of macOS 10, macOS Big Sur was presented as version 11 in 2020, and macOS Monterey was presented as version 12 in 2021.
macOS has supported three major processor architectures, beginning with PowerPC-based Macs in 1999. In 2006, Apple transitioned to the Intel architecture with a line of Macs using Intel Core processors. In 2020, Apple began the Apple silicon transition, using self-designed, 64-bit ARM-based Apple M1 processors on new Mac computers.
History
Development
The heritage of what would become macOS had originated at NeXT, a company founded by Steve Jobs following his departure from Apple in 1985. There, the Unix-like NeXTSTEP operating system was developed, before being launched in 1989. The kernel of NeXTSTEP is based upon the Mach kernel, which was originally developed at Carnegie Mellon University, with additional kernel layers and low-level user space code derived from parts of BSD. Its graphical user interface was built on top of an object-oriented GUI toolkit using the Objective-C programming language.
Throughout the early 1990s, Apple had tried to create a "next-generation" OS to succeed its classic Mac OS through the Taligent, Copland and Gershwin projects, but all were eventually abandoned. This led Apple to purchase NeXT in 1996, allowing NeXTSTEP, then called OPENSTEP, to serve as the basis for Apple's next generation operating system.
This purchase also led to Steve Jobs returning to Apple as interim, and then permanent, CEO, shepherding the transformation of the programmer-friendly OPENSTEP into a system that would be adopted by Apple's primary market of home users and creative professionals. The project was first code-named "Rhapsody" and then officially named Mac OS X.
Mac OS X
Mac OS X was originally presented as the tenth major version of Apple's operating system for Macintosh computers; until 2020, versions of macOS retained the major version number "10". The letter "X" in Mac OS X's name refers to the number 10, a Roman numeral, and Apple has stated that it should be pronounced "ten" in this context. However, it is also commonly pronounced like the letter "X". Previous Macintosh operating systems (versions of the classic Mac OS) were named using Arabic numerals, as with Mac OS 8 and Mac OS 9. As of 2020 and 2021, Apple reverted to Arabic numeral versioning for successive releases, macOS 11 Big Sur and macOS 12 Monterey, as they have done for the iPhone 11 and iPhone 12 following the iPhone X.
The first version of Mac OS X, Mac OS X Server 1.0, was a transitional product, featuring an interface resembling the classic Mac OS, though it was not compatible with software designed for the older system. Consumer releases of Mac OS X included more backward compatibility. Mac OS applications could be rewritten to run natively via the Carbon API; many could also be run directly through the Classic Environment with a reduction in performance.
The consumer version of Mac OS X was launched in 2001 with Mac OS X 10.0. Reviews were variable, with extensive praise for its sophisticated, glossy Aqua interface, but criticizing it for sluggish performance. With Apple's popularity at a low, the makers of several classic Mac applications such as FrameMaker and PageMaker declined to develop new versions of their software for Mac OS X. Ars Technica columnist John Siracusa, who reviewed every major OS X release up to 10.10, described the early releases in retrospect as 'dog-slow, feature poor' and Aqua as 'unbearably slow and a huge resource hog'.
Apple rapidly developed several new releases of Mac OS X. Siracusa's review of version 10.3, Panther, noted "It's strange to have gone from years of uncertainty and vaporware to a steady annual supply of major new operating system releases." Version 10.4, Tiger, reportedly shocked executives at Microsoft by offering a number of features, such as fast file searching and improved graphics processing, that Microsoft had spent several years struggling to add to Windows with acceptable performance.
As the operating system evolved, it moved away from the classic Mac OS, with applications being added and removed. Considering music to be a key market, Apple developed the iPod music player and music software for the Mac, including iTunes and GarageBand. Targeting the consumer and media markets, Apple emphasized its new "digital lifestyle" applications such as the iLife suite, integrated home entertainment through the Front Row media center and the Safari web browser. With increasing popularity of the internet, Apple offered additional online services, including the .Mac, MobileMe and most recently iCloud products. It later began selling third-party applications through the Mac App Store.
Newer versions of Mac OS X also included modifications to the general interface, moving away from the striped gloss and transparency of the initial versions. Some applications began to use a brushed metal appearance, or non-pinstriped title bar appearance in version 10.4. In Leopard, Apple announced a unification of the interface, with a standardized gray-gradient window style.
In 2006, the first Intel Macs released used a specialized version of Mac OS X 10.4 Tiger.
A key development for the system was the announcement and release of the iPhone from 2007 onwards. While Apple's previous iPod media players used a minimal operating system, the iPhone used an operating system based on Mac OS X, which would later be called "iPhone OS" and then iOS. The simultaneous release of two operating systems based on the same frameworks placed tension on Apple, which cited the iPhone as forcing it to delay Mac OS X 10.5 Leopard. However, after Apple opened the iPhone to third-party developers its commercial success drew attention to Mac OS X, with many iPhone software developers showing interest in Mac development.
In 2007, Mac OS X 10.5 Leopard was the sole release with universal binary components, allowing installation on both Intel Macs and select PowerPC Macs. It is also the final release with PowerPC Mac support. Mac OS X 10.6 Snow Leopard was the first version of OS X to be built exclusively for Intel Macs, and the final release with 32-bit Intel Mac support. The name was intended to signal its status as an iteration of Leopard, focusing on technical and performance improvements rather than user-facing features; indeed it was explicitly branded to developers as being a 'no new features' release. Since its release, several OS X or macOS releases (namely OS X Mountain Lion, OS X El Capitan, macOS High Sierra, and macOS Monterey) follow this pattern, with a name derived from its predecessor, similar to the 'tick–tock model' used by Intel.
In two succeeding versions, Lion and Mountain Lion, Apple moved some applications to a highly skeuomorphic style of design inspired by contemporary versions of iOS while simplifying some elements by making controls such as scroll bars fade out when not in use. This direction was, like brushed metal interfaces, unpopular with some users, although it continued a trend of greater animation and variety in the interface previously seen in design aspects such as the Time Machine backup utility, which presented past file versions against a swirling nebula, and the glossy translucent dock of Leopard and Snow Leopard. In addition, with Mac OS X 10.7 Lion, Apple ceased to release separate server versions of Mac OS X, selling server tools as a separate downloadable application through the Mac App Store. A review described the trend in the server products as becoming "cheaper and simpler... shifting its focus from large businesses to small ones."
OS X
In 2012, with the release of OS X 10.8 Mountain Lion, the name of the system was shortened from Mac OS X to OS X. That year, Apple removed the head of OS X development, Scott Forstall, and design was changed towards a more minimal direction. Apple's new user interface design, using deep color saturation, text-only buttons and a minimal, 'flat' interface, was debuted with iOS 7 in 2013. With OS X engineers reportedly working on iOS 7, the version released in 2013, OS X 10.9 Mavericks, was something of a transitional release, with some of the skeuomorphic design removed, while most of the general interface of Mavericks remained unchanged. The next version, OS X 10.10 Yosemite, adopted a design similar to iOS 7 but with greater complexity suitable for an interface controlled with a mouse.
From 2012 onwards, the system shifted to an annual release schedule similar to that of iOS. Apple also steadily cut the cost of updates from Snow Leopard onwards, before removing upgrade fees altogether in 2013. Some journalists and third-party software developers have suggested that this decision, while allowing more rapid feature releases, meant less opportunity to focus on stability, with no version of OS X recommendable for users requiring stability and performance above new features. Apple's 2015 update, OS X 10.11 El Capitan, was announced as focusing specifically on stability and performance improvements.
macOS
In 2016, with the release of macOS 10.12 Sierra, the name was changed from OS X to macOS to align it with the branding of Apple's other primary operating systems: iOS, watchOS, and tvOS. macOS 10.12 Sierra's main features are the introduction of Siri to macOS, Optimized Storage, improvements to included applications, and greater integration with Apple's iPhone and Apple Watch. The Apple File System (APFS) was announced at Apple's annual Worldwide Developers Conference (WWDC) in June 2016 as a replacement for HFS+, a highly criticized file system.
Apple previewed macOS 10.13 High Sierra at WWDC 2017, before releasing it later that year. When running on solid state drives, it uses APFS, rather than HFS+. Its successor, macOS 10.14 Mojave, was released in 2018, adding a dark user interface option and a dynamic wallpaper setting. It was succeeded by macOS 10.15 Catalina in 2019, which replaces iTunes with separate apps for different types of media, and introduces the Catalyst system for porting iOS apps.
In 2020, Apple previewed macOS 11 Big Sur at the WWDC 2020. This was the first increment in the primary version number of macOS since the release of Mac OS X Public Beta in 2000; updates to macOS 11 were given 11.x numbers, matching the version numbering scheme used by Apple's other operating systems. Big Sur brought major changes to the UI and was the first version to run on the ARM instruction set. The new numbering system was continued in 2021 with macOS 12 Monterey.
Architecture
At macOS's core is a POSIX-compliant operating system built on top of the XNU kernel, with standard Unix facilities available from the command line interface. Apple has released this family of software as a free and open source operating system named Darwin. On top of Darwin, Apple layered a number of components, including the Aqua interface and the Finder, to complete the GUI-based operating system which is macOS.
With its original introduction as Mac OS X, the system brought a number of new capabilities to provide a more stable and reliable platform than its predecessor, the classic Mac OS. For example, pre-emptive multitasking and memory protection improved the system's ability to run multiple applications simultaneously without them interrupting or corrupting each other. Many aspects of macOS's architecture are derived from OPENSTEP, which was designed to be portable, to ease the transition from one platform to another. For example, NeXTSTEP was ported from the original 68k-based NeXT workstations to x86 and other architectures before NeXT was purchased by Apple, and OPENSTEP was later ported to the PowerPC architecture as part of the Rhapsody project.
Prior to macOS High Sierra, and on drives other than solid state drives (SSDs), the default file system is HFS+, which it inherited from the classic Mac OS. Operating system designer Linus Torvalds has criticized HFS+, saying it is "probably the worst file system ever", whose design is "actively corrupting user data". He criticized the case insensitivity of file names, a design made worse when Apple extended the file system to support Unicode.
The Darwin subsystem in macOS manages the file system, which includes the Unix permissions layer. In 2003 and 2005, two Macworld editors expressed criticism of the permission scheme; Ted Landau called misconfigured permissions "the most common frustration" in macOS, while Rob Griffiths suggested that some users may even have to reset permissions every day, a process which can take up to 15 minutes. More recently, another Macworld editor, Dan Frakes, called the procedure of repairing permissions vastly overused. He argues that macOS typically handles permissions properly without user interference, and resetting permissions should only be tried when problems emerge.
The architecture of macOS incorporates a layered design: the layered frameworks aid rapid development of applications by providing existing code for common tasks. Apple provides its own software development tools, most prominently an integrated development environment called Xcode. Xcode provides interfaces to compilers that support several programming languages including C, C++, Objective-C, and Swift. For the Mac transition to Intel processors, it was modified so that developers could build their applications as a universal binary, which provides compatibility with both the Intel-based and PowerPC-based Macintosh lines. First and third-party applications can be controlled programmatically using the AppleScript framework, retained from the classic Mac OS, or using the newer Automator application that offers pre-written tasks that do not require programming knowledge.
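To make the idea of a universal binary concrete, the following is a minimal sketch rather than anything taken from Apple's documentation: a trivial C program that could be compiled for two architectures at once. The file name and the exact clang invocation shown in the comment are illustrative assumptions.

```c
/* hello.c: a trivial program used only to illustrate a universal ("fat") binary.
 * With Apple's clang, passing more than one -arch flag is assumed to produce a
 * single executable containing a code slice for each architecture, for example:
 *
 *     clang -arch x86_64 -arch arm64 hello.c -o hello
 *
 * During the PowerPC-to-Intel transition, the equivalent flags would have
 * targeted ppc and i386 instead.
 */
#include <stdio.h>

int main(void) {
    printf("Hello from whichever architecture slice is running\n");
    return 0;
}
```

Utilities such as file or lipo can then report which architecture slices the resulting executable contains.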
Software compatibility
Apple offered two main APIs to develop software natively for macOS: Cocoa and Carbon. Cocoa was a descendant of APIs inherited from OPENSTEP with no ancestry from the classic Mac OS, while Carbon was an adaptation of classic Mac OS APIs, allowing Mac software to be minimally rewritten to run natively on Mac OS X.
The Cocoa API was created as the result of a 1993 collaboration between NeXT Computer and Sun Microsystems. This heritage is highly visible for Cocoa developers, since the "NS" prefix is ubiquitous in the framework, standing variously for NeXTSTEP or NeXT/Sun. The official OPENSTEP API, published in September 1994, was the first to split the API between Foundation and ApplicationKit and the first to use the "NS" prefix. Traditionally, Cocoa programs have been mostly written in Objective-C, with Java as an alternative. However, on July 11, 2005, Apple announced that "features added to Cocoa in Mac OS X versions later than 10.4 will not be added to the Cocoa-Java programming interface." macOS also used to support the Java Platform as a "preferred software package"—in practice this meant that applications written in Java fitted as neatly into the operating system as possible while still being cross-platform compatible, and that graphical user interfaces written in Swing looked almost exactly like native Cocoa interfaces. Since 2014, Apple has promoted its new programming language Swift as the preferred language for software development on Apple platforms.
Apple's original plan with macOS was to require all developers to rewrite their software into the Cocoa APIs. This caused much outcry among existing Mac developers, who threatened to abandon the platform rather than invest in a costly rewrite, and the idea was shelved. To permit a smooth transition from Mac OS 9 to Mac OS X, the Carbon Application Programming Interface (API) was created. Applications written with Carbon were initially able to run natively on both classic Mac OS and Mac OS X, although this ability was later dropped as Mac OS X developed. Carbon was not included in the first product sold as Mac OS X: the little-used original release of Mac OS X Server 1.0, which also did not include the Aqua interface. Apple limited further development of Carbon from the release of Leopard onwards and announced that Carbon applications would not run in 64-bit mode. A number of macOS applications continued to use Carbon for some time afterwards, especially ones with heritage dating back to the classic Mac OS and for which updates would be difficult, uneconomic or not necessary. This included Microsoft Office up to Office 2016, and Photoshop up to CS5. Early versions of macOS could also run some classic Mac OS applications through the Classic Environment with performance limitations; this feature was removed from 10.5 onwards and was never available on Macs using Intel processors.
Because macOS is POSIX compliant, many software packages written for the other Unix-like systems including Linux can be recompiled to run on it, including much scientific and technical software. Third-party projects such as Homebrew, Fink, MacPorts and pkgsrc provide pre-compiled or pre-formatted packages. Apple and others have provided versions of the X Window System graphical interface which can allow these applications to run with an approximation of the macOS look-and-feel. The current Apple-endorsed method is the open-source XQuartz project; earlier versions could use the X11 application provided by Apple, or before that the XDarwin project.
Applications can be distributed to Macs and installed by the user from any source and by any method such as downloading (with or without code signing, available via an Apple developer account) or through the Mac App Store, a marketplace of software maintained by Apple through a process requiring the company's approval. Apps installed through the Mac App Store run within a sandbox, restricting their ability to exchange information with other applications or modify the core operating system and its features. This has been cited as an advantage, by allowing users to install apps with confidence that they should not be able to damage their system, but also as a disadvantage due to blocking the Mac App Store's use for professional applications that require elevated privileges. Applications without any code signature cannot be run by default except from a computer's administrator account.
Apple produces macOS applications, some included with macOS and some sold separately; these include iWork, Final Cut Pro, Logic Pro, iLife, and the database application FileMaker. Numerous other developers also offer software for macOS.
In 2018, Apple introduced an application layer, reportedly codenamed Marzipan, to port iOS apps to macOS. macOS Mojave included ports of four first-party iOS apps including Home and News, and it was announced that the API would be available for third-party developers to use from 2019.
Hardware compatibility
Tools such as XPostFacto and patches applied to the installation media have been developed by third parties to enable installation of newer versions of macOS on systems not officially supported by Apple. These include a number of pre-G3 Power Macintosh systems, which can be made to run up to and including Mac OS X 10.2 Jaguar; all G3-based Macs, which can run up to and including Tiger; and sub-867 MHz G4 Macs, which can run Leopard after removing the restriction from the installation DVD or entering a command in the Mac's Open Firmware interface to tell the Leopard installer that the machine has a clock rate of 867 MHz or greater. Except for features requiring specific hardware such as graphics acceleration or DVD writing, the operating system offers the same functionality on all supported hardware.
As most Mac hardware components since the Intel transition, or components similar to them, are available for purchase, some technology-capable groups have developed software to install macOS on non-Apple computers. These are referred to as Hackintoshes, a portmanteau of the words "hack" and "Macintosh". This violates Apple's EULA (and is therefore unsupported by Apple technical support, warranties etc.), but communities that cater to personal users, who do not install for resale and profit, have generally been ignored by Apple. These self-made computers allow more flexibility and customization of hardware, but at the cost of leaving the user more responsible for their own machine, such as in matters of data integrity and security. Psystar, a business that attempted to profit from selling macOS on non-Apple certified hardware, was sued by Apple in 2008.
PowerPC–Intel transition
In April 2002, eWeek announced a rumor that Apple had a version of Mac OS X code-named Marklar, which ran on Intel x86 processors. The idea behind Marklar was to keep Mac OS X running on an alternative platform should Apple become dissatisfied with the progress of the PowerPC platform. These rumors subsided until late in May 2005, when various media outlets, such as The Wall Street Journal and CNET, announced that Apple would unveil Marklar in the coming months.
On June 6, 2005, Steve Jobs announced in his keynote address at WWDC that Apple would be making the transition from PowerPC to Intel processors over the following two years, and that Mac OS X would support both platforms during the transition. Jobs also confirmed rumors that Apple had versions of Mac OS X running on Intel processors for most of its developmental life. Intel-based Macs would run a new recompiled version of OS X along with Rosetta, a binary translation layer which enables software compiled for PowerPC Mac OS X to run on Intel Mac OS X machines. The system was included with Mac OS X versions up to version 10.6.8. Apple dropped support for Classic mode on the new Intel Macs. Third party emulation software such as Mini vMac, Basilisk II and SheepShaver provided support for some early versions of Mac OS. A new version of Xcode and the underlying command-line compilers supported building universal binaries that would run on either architecture.
PowerPC-only software is supported with Apple's official emulation software, Rosetta, though applications eventually had to be rewritten to run properly on the newer versions released for Intel processors. Apple initially encouraged developers to produce universal binaries with support for both PowerPC and Intel. PowerPC binaries suffer a performance penalty when run on Intel Macs through Rosetta. Moreover, some PowerPC software, such as kernel extensions and System Preferences plugins, is not supported on Intel Macs at all, and some PowerPC applications would not run on Intel-based macOS at all. Plugins for Safari need to be compiled for the same platform as Safari, so when Safari runs on Intel it requires plug-ins that have been compiled as Intel-only or universal binaries; PowerPC-only plug-ins will not work. While Intel Macs can run PowerPC, Intel, and universal binaries, PowerPC Macs support only universal and PowerPC builds.
Support for the PowerPC platform was dropped following the transition. In 2009, Apple announced at WWDC that Mac OS X 10.6 Snow Leopard would drop support for PowerPC processors and be Intel-only. Rosetta continued to be offered as an optional download or installation choice in Snow Leopard before it was discontinued with Mac OS X 10.7 Lion. In addition, new versions of Mac OS X first- and third-party software increasingly required Intel processors, including new versions of iLife, iWork, Aperture and Logic Pro.
Intel–ARM transition
Rumors of Apple shifting Macs to the ARM processors used by iOS devices began circulating as early as 2011, and ebbed and flowed throughout the 2010s. Rumors intensified in 2020, when numerous reports announced that the company would announce its shift to its custom processors at WWDC.
Apple officially announced its shift to processors designed in-house on June 22, 2020, at WWDC 2020, with the transition planned to last for two years. The first release of macOS to support ARM is macOS Big Sur.
The change in processor architecture allows Macs with ARM processors to run iOS and iPadOS apps natively.
Features
Aqua user interface
One of the major differences between the classic Mac OS and the current macOS was the addition of Aqua, a graphical user interface with water-like elements, in the first major release of Mac OS X. Every window element, text, graphic, or widget is drawn on-screen using spatial anti-aliasing technology. ColorSync, a technology introduced many years before, was improved and built into the core drawing engine, to provide color matching for printing and multimedia professionals. Also, drop shadows were added around windows and isolated text elements to provide a sense of depth. New interface elements were integrated, including sheets (dialog boxes attached to specific windows) and drawers, which would slide out and provide options.
The use of soft edges, translucent colors, and pinstripes, similar to the hardware design of the first iMacs, brought more texture and color to the user interface when compared to what Mac OS 9 and Mac OS X Server 1.0's "Platinum" appearance had offered. According to Siracusa, the introduction of Aqua and its departure from the then conventional look "hit like a ton of bricks."
Bruce Tognazzini (who founded the original Apple Human Interface Group) said that the Aqua interface in Mac OS X 10.0 represented a step backwards in usability compared with the original Mac OS interface.
Third-party developers started producing skins for customizable applications and other operating systems which mimicked the Aqua appearance. To some extent, Apple has used the successful transition to this new design as leverage, at various times threatening legal action against people who make or distribute software with an interface the company says is derived from its copyrighted design.
Apple has continued to change aspects of the macOS appearance and design, particularly with tweaks to the appearance of windows and the menu bar. Since 2012, Apple has sold many of its Mac models with high-resolution Retina displays, and macOS and its APIs have extensive support for resolution-independent development on high-resolution displays. Reviewers have described Apple's support for the technology as superior to that on Windows.
The human interface guidelines published by Apple for macOS are followed by many applications, giving them consistent user interface and keyboard shortcuts. In addition, new services for applications are included, which include spelling and grammar checkers, special characters palette, color picker, font chooser and dictionary; these global features are present in every Cocoa application, adding consistency. The graphics system OpenGL composites windows onto the screen to allow hardware-accelerated drawing. This technology, introduced in version 10.2, is called Quartz Extreme, a component of Quartz. Quartz's internal imaging model correlates well with the Portable Document Format (PDF) imaging model, making it easy to output PDF to multiple devices. As a side result, PDF viewing and creating PDF documents from any application are built-in features. Reflecting its popularity with design users, macOS also has system support for a variety of professional video and image formats and includes an extensive pre-installed font library, featuring many prominent brand-name designs.
Components
The Finder is a file browser allowing quick access to all areas of the computer, which has been modified throughout subsequent releases of macOS. Quick Look has been part of the Finder since version 10.5. It allows for dynamic previews of files, including videos and multi-page documents without opening any other applications. Spotlight, a file searching technology which has been integrated into the Finder since version 10.4, allows rapid real-time searches of data files; mail messages; photos; and other information based on item properties (metadata) or content. macOS makes use of a Dock, which holds file and folder shortcuts as well as minimized windows.
Apple added Exposé in version 10.3 (called Mission Control since version 10.7), a feature which includes three functions to aid navigation between windows and the desktop. Its functions are to instantly display all open windows as thumbnails for easy navigation to different tasks, display all open windows as thumbnails from the current application, and hide all windows to access the desktop. FileVault is optional encryption of the user's files with the 128-bit Advanced Encryption Standard (AES-128).
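FileVault's internal design is not described here, but the general idea of encrypting data with AES-128 can be sketched with a short, hypothetical C example using the OpenSSL EVP interface; the choice of OpenSSL, of CBC mode, and of a randomly generated key are illustrative assumptions, not a description of how FileVault itself works.

```c
/* Generic illustration of AES-128 encryption with OpenSSL (link with -lcrypto).
 * This is not FileVault code; it only shows what "encrypting with AES-128" involves. */
#include <openssl/evp.h>
#include <openssl/rand.h>
#include <stdio.h>

int main(void) {
    unsigned char key[16], iv[16];            /* 128-bit key and initialization vector */
    RAND_bytes(key, sizeof key);
    RAND_bytes(iv, sizeof iv);

    const unsigned char plaintext[] = "example file contents";
    unsigned char ciphertext[64];             /* large enough for the input plus padding */
    int len = 0, total = 0;

    EVP_CIPHER_CTX *ctx = EVP_CIPHER_CTX_new();
    EVP_EncryptInit_ex(ctx, EVP_aes_128_cbc(), NULL, key, iv);
    EVP_EncryptUpdate(ctx, ciphertext, &len, plaintext, sizeof plaintext - 1);
    total = len;
    EVP_EncryptFinal_ex(ctx, ciphertext + total, &len);   /* applies block padding */
    total += len;
    EVP_CIPHER_CTX_free(ctx);

    printf("encrypted %d bytes\n", total);
    return 0;
}
```

A real disk-encryption feature additionally has to derive keys from the user's password and store them securely, which this sketch deliberately omits.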
Features introduced in version 10.4 include Automator, an application designed to create an automatic workflow for different tasks; Dashboard, a full-screen group of small applications called desktop widgets that can be called up and dismissed in one keystroke; and Front Row, a media viewer interface accessed by the Apple Remote. Sync Services allows applications to access a centralized extensible database for various elements of user data, including calendar and contact items. The operating system then managed conflicting edits and data consistency.
All system icons are scalable up to 512×512 pixels as of version 10.5 to accommodate various places where they appear in larger size, including for example the Cover Flow view, a three-dimensional graphical user interface included with iTunes, the Finder, and other Apple products for visually skimming through files and digital media libraries via cover artwork. That version also introduced Spaces, a virtual desktop implementation which enables the user to have more than one desktop and display them in an Exposé-like interface; an automatic backup technology called Time Machine, which allows users to view and restore previous versions of files and application data; and Screen Sharing was built in for the first time.
In more recent releases, Apple has developed support for emoji characters by including the proprietary Apple Color Emoji font. Apple has also connected macOS with social networks such as Twitter and Facebook through the addition of share buttons for content such as pictures and text. Apple has brought several applications and features that originally debuted in iOS, its mobile operating system, to macOS in recent releases, notably the intelligent personal assistant Siri, which was introduced in version 10.12 of macOS.
Multilingual support
There are 39 system languages available in macOS for the user at the moment of installation; the system language is used throughout the entire operating system environment. Input methods for typing in dozens of scripts can be chosen independently of the system language. Recent updates have added increased support for Chinese characters and interconnections with popular social networks in China.
Updating methods
macOS can be updated using the Software Update preference pane in System Preferences or the softwareupdate command line utility. Until OS X 10.8 Mountain Lion, a separate Software Update application performed this functionality. In Mountain Lion and later, this was merged into the Mac App Store application, although the underlying update mechanism remains unchanged and is fundamentally different from the download mechanism used when purchasing an App Store application. In macOS 10.14 Mojave, the updating function was moved again to the Software Update preference pane.
Release history
Timeline of versions
With the exception of Mac OS X Server 1.0 and the original public beta, OS X versions were named after big cats until OS X 10.9 Mavericks, when Apple switched to using California locations. Prior to its release, Mac OS X 10.0 was code named "Cheetah" internally at Apple, and Mac OS X 10.1 was code named internally as "Puma". After the immense buzz surrounding Mac OS X 10.2, codenamed "Jaguar", Apple's product marketing began openly using the code names to promote the operating system. Mac OS X 10.3 was marketed as "Panther", Mac OS X 10.4 as "Tiger", Mac OS X 10.5 as "Leopard", Mac OS X 10.6 as "Snow Leopard", Mac OS X 10.7 as "Lion", OS X 10.8 as "Mountain Lion", and OS X 10.9 as "Mavericks".
"Panther", "Tiger" and "Leopard" are registered as trademarks of Apple, but "Cheetah", "Puma" and "Jaguar" have never been registered. Apple has also registered "Lynx" and "Cougar" as trademarks, though these were allowed to lapse. Computer retailer Tiger Direct sued Apple for its use of the name "Tiger". On May 16, 2005, a US federal court in the Southern District of Florida ruled that Apple's use did not infringe on Tiger Direct's trademark.
Mac OS X Public Beta
On September 13, 2000, Apple released a $29.95 "preview" version of Mac OS X, internally codenamed Kodiak, to gain feedback from users.
The "PB", as it was known, marked the first public availability of the Aqua interface and Apple made many changes to the UI based on customer feedback. Mac OS X Public Beta expired and ceased to function in Spring 2001.
Mac OS X 10.0 (Cheetah)
On March 24, 2001, Apple released Mac OS X 10.0 (internally codenamed Cheetah).
The initial version was slow, incomplete, and had very few applications available at launch, mostly from independent developers. While many critics suggested that the operating system was not ready for mainstream adoption, they recognized the importance of its initial launch as a base on which to improve. Simply releasing Mac OS X was received by the Macintosh community as a great accomplishment, for attempts to overhaul the Mac OS had been underway since 1996, and delayed by countless setbacks.
Mac OS X 10.1 (Puma)
Later that year on September 25, 2001, Mac OS X 10.1 (internally codenamed Puma) was released. It featured increased performance and provided missing features, such as DVD playback. Apple released 10.1 as a free upgrade CD for 10.0 users, in addition to the US$129 boxed version for people running Mac OS 9. It was discovered that the upgrade CDs were full install CDs that could be used with Mac OS 9 systems by removing a specific file; Apple later re-released the CDs in an actual stripped-down format that did not facilitate installation on such systems. On January 7, 2002, Apple announced that Mac OS X was to be the default operating system for all Macintosh products by the end of that month.
Mac OS X 10.2 Jaguar
On August 23, 2002, Apple followed up with Mac OS X 10.2 Jaguar, the first release to use its code name as part of the branding.
It brought great raw performance improvements, a sleeker look, and many powerful user-interface enhancements (over 150, according to Apple), including Quartz Extreme for compositing graphics directly on an ATI Radeon or Nvidia GeForce2 MX AGP-based video card with at least 16 MB of VRAM, a system-wide repository for contact information in the new Address Book, and an instant messaging client named iChat. The Happy Mac which had appeared during the Mac OS startup sequence for almost 18 years was replaced with a large grey Apple logo with the introduction of Mac OS X v10.2.
Mac OS X 10.3 Panther
Mac OS X v10.3 Panther was released on October 24, 2003. It significantly improved performance and incorporated the most extensive update yet to the user interface. Panther included as many or more new features as Jaguar had the year before, including an updated Finder incorporating a brushed-metal interface, fast user switching, Exposé, FileVault, Safari, iChat AV (which added video conferencing features to iChat), improved Portable Document Format (PDF) rendering and much greater Microsoft Windows interoperability. Support for some early G3 computers such as "beige" Power Macs and "WallStreet" PowerBooks was discontinued.
Mac OS X 10.4 Tiger
Mac OS X 10.4 Tiger was released on April 29, 2005. Apple stated that Tiger contained more than 200 new features. As with Panther, certain older machines were no longer supported; Tiger requires a Mac with 256 MB of RAM and a built-in FireWire port. Among the new features, Tiger introduced Spotlight, Dashboard, Smart Folders, an updated Mail program with Smart Mailboxes, QuickTime 7, Safari 2, Automator, VoiceOver, Core Image and Core Video. The initial release of the Apple TV used a modified version of Tiger with a different graphical interface and fewer applications and services. On January 10, 2006, Apple released the first Intel-based Macs along with the 10.4.4 update to Tiger. This operating system functioned identically on the PowerPC-based Macs and the new Intel-based machines, with the exception of the Intel release lacking support for the Classic environment.
Mac OS X 10.5 Leopard
Mac OS X 10.5 Leopard was released on October 26, 2007. It was called by Apple "the largest update of Mac OS X". It brought more than 300 new features. Leopard supports both PowerPC- and Intel x86-based Macintosh computers; support for the G3 processor was dropped and the G4 processor required a minimum clock rate of 867 MHz, and at least 512 MB of RAM to be installed. The single DVD works for all supported Macs (including 64-bit machines). New features include a new look, an updated Finder, Time Machine, Spaces, Boot Camp pre-installed, full support for 64-bit applications (including graphical applications), new features in Mail and iChat, and a number of new security features. Leopard is an Open Brand UNIX 03 registered product on the Intel platform. It was also the first BSD-based OS to receive UNIX 03 certification. Leopard dropped support for the Classic Environment and all Classic applications. It was the final version of Mac OS X to support the PowerPC architecture.
Mac OS X 10.6 Snow Leopard
Mac OS X 10.6 Snow Leopard was released on August 28, 2009. Rather than delivering big changes to the appearance and end user functionality like the previous releases of Mac OS X, Snow Leopard focused on "under the hood" changes, increasing the performance, efficiency, and stability of the operating system. For most users, the most noticeable changes were: the disk space that the operating system frees up after a clean install compared to Mac OS X 10.5 Leopard, a more responsive Finder rewritten in Cocoa, faster Time Machine backups, more reliable and user-friendly disk ejects, a more powerful version of the Preview application, as well as a faster Safari web browser. Snow Leopard only supported machines with Intel CPUs, required at least 1 GB of RAM, and dropped default support for applications built for the PowerPC architecture (Rosetta could be installed as an additional component to retain support for PowerPC-only applications).
Snow Leopard also featured new 64-bit technology capable of supporting greater amounts of RAM, improved support for multi-core processors through Grand Central Dispatch, and advanced GPU performance with OpenCL.
The 10.6.6 update introduced support for the Mac App Store, Apple's digital distribution platform for macOS applications.
OS X 10.7 Lion
OS X 10.7 Lion was released on July 20, 2011. It brought developments made in Apple's iOS, such as an easily navigable display of installed applications called Launchpad and a greater use of multi-touch gestures, to the Mac. This release removed Rosetta, making it incompatible with PowerPC applications.
Changes made to the GUI include auto-hiding scrollbars that only appear when they are used, and Mission Control which unifies Exposé, Spaces, Dashboard, and full-screen applications within a single interface. Apple also made changes to applications: they resume in the same state as they were before they were closed, similar to iOS. Documents auto-save by default.
OS X 10.8 Mountain Lion
OS X 10.8 Mountain Lion was released on July 25, 2012. Following the release of Lion the previous year, it was the first of the annual rather than two-yearly updates to OS X (and later macOS), which also closely aligned with the annual iOS operating system updates. It incorporates some features seen in iOS 5, which include Game Center, support for iMessage in the new Messages messaging application, and Reminders as a to-do list app separate from iCal (which is renamed as Calendar, like the iOS app). It also includes support for storing iWork documents in iCloud. Notification Center, which makes its debut in Mountain Lion, is a desktop version similar to the one in iOS 5.0 and higher. Application pop-ups are now concentrated in the corner of the screen, and the Center itself is pulled from the right side of the screen. Mountain Lion also added features for Chinese users, including support for Baidu as an option for the Safari search engine; QQ, 163.com and 126.com services for Mail, Contacts and Calendar; and integration of Youku, Tudou and Sina Weibo into share sheets.
Starting with Mountain Lion, Apple software updates (including the OS) are distributed via the App Store. This updating mechanism replaced the Apple Software Update utility.
OS X 10.9 Mavericks
OS X 10.9 Mavericks was released on October 22, 2013. It was a free upgrade to all users running Snow Leopard or later with a 64-bit Intel processor. Its changes include the addition of the previously iOS-only Maps and iBooks applications, improvements to the Notification Center, enhancements to several applications, and many under-the-hood improvements.
OS X 10.10 Yosemite
OS X 10.10 Yosemite was released on October 16, 2014. It features a redesigned user interface similar to that of iOS 7, intended to feature a more minimal, text-based 'flat' design, with use of translucency effects and intensely saturated colors. Apple's showcase new feature in Yosemite is Handoff, which enables users with iPhones running iOS 8.1 or later to answer phone calls, receive and send SMS messages, and complete unfinished iPhone emails on their Mac. As of OS X 10.10.3, Photos replaced iPhoto and Aperture.
OS X 10.11 El Capitan
OS X 10.11 El Capitan was released on September 30, 2015. Similar to Mac OS X 10.6 Snow Leopard, Apple described this release as emphasizing "refinements to the Mac experience" and "improvements to system performance". Refinements include public transport built into the Maps application, GUI improvements to the Notes application, adopting San Francisco as the system font for clearer legibility, and the introduction of System Integrity Protection.
The Metal API, first introduced in iOS 8, was also included in this operating system for "all Macs since 2012". According to Apple, Metal accelerates system-level rendering by up to 50 percent, resulting in faster graphics performance for everyday apps. Metal also delivers up to 10 times faster draw call performance for a more fluid experience in games and pro apps.
macOS 10.12 Sierra
macOS 10.12 Sierra was released to the public on September 20, 2016. New features include the addition of Siri, Optimized Storage, and updates to Photos, Messages, and iTunes.
macOS 10.13 High Sierra
macOS 10.13 High Sierra was released to the public on September 25, 2017. Like OS X El Capitan and OS X Mountain Lion, High Sierra is a refinement-based update having very few new features visible to the user, including updates to Safari, Photos, and Mail, among other changes.
The major change under the hood is the switch to the Apple File System, optimized for the solid-state storage used in most new Mac computers.
macOS 10.14 Mojave
macOS 10.14 Mojave was released on September 24, 2018. The update introduced a system-wide dark mode and several new apps lifted from iOS, such as Apple News. It was the first version to require a GPU that supports Metal. Mojave also changed the system software update mechanism from the App Store (where it had been since OS X Mountain Lion) to a new panel in System Preferences. App updates remain in the App Store.
macOS 10.15 Catalina
macOS 10.15 Catalina was released on October 7, 2019. Updates included enhanced voice control, and bundled apps for music, video, and podcasts that together replace the functions of iTunes, and the ability to use an iPad as an external monitor. Catalina officially dropped support for 32-bit applications.
macOS 11 Big Sur
macOS Big Sur was announced during the WWDC keynote speech on June 22, 2020, and it was made available to the general public on November 12, 2020. This is the first time the major version number of the operating system has been incremented since the Mac OS X Public Beta in 2000. It brings ARM support, new icons, and aesthetic user interface changes to the system.
macOS 12 Monterey
macOS Monterey was announced during the WWDC keynote speech on June 7, 2021 and released on October 25, 2021, introducing Universal Control (which allows input devices to be used with multiple devices simultaneously), Focus (which allows selectively limiting notifications and alerts depending on user-defined user/work modes), Shortcuts (a task automation framework previously only available on iOS and iPadOS expected to replace Automator), a redesigned Safari Web browser, and updates and improvements to FaceTime.
Reception
Usage share
As of July 2016, macOS was the second-most widely used general-purpose desktop client operating system on the World Wide Web after Microsoft Windows, with a 4.90% usage share according to statistics compiled by the Wikimedia Foundation, roughly five times the estimated usage of Linux (1.01%). Usage share generally continues to shift away from the desktop and toward mobile operating systems such as iOS and Android.
Malware and spyware
In its earlier years, Mac OS X enjoyed a near-absence of the types of malware and spyware that have affected Microsoft Windows users, due in part to macOS's smaller usage share compared to Windows. Worms, as well as potential vulnerabilities, were noted in 2006, which led some industry analysts and anti-virus companies to issue warnings that Apple's Mac OS X is not immune to malware. Increasing market share coincided with additional reports of a variety of attacks. In early 2011, Mac OS X experienced a large increase in malware attacks, and malware such as Mac Defender, MacProtector, and MacGuard was seen as an increasing problem for Mac users. At first, the malware installer required the user to enter the administrative password, but later versions installed without user input. Initially, Apple support staff were instructed not to assist in the removal of the malware or admit the existence of the malware issue, but as the malware spread, a support document was issued. Apple announced an OS X update to fix the problem. An estimated 100,000 users were affected. Apple releases security updates for macOS regularly, as well as signature files containing malware signatures for XProtect, an anti-malware feature that is part of File Quarantine and has been present since Mac OS X Snow Leopard.
Promotion
As a device company, Apple has mostly promoted macOS to sell Macs, with promotion of macOS updates focused on existing users, promotion at Apple Stores and other retail partners, or through events for developers. In larger-scale advertising campaigns, Apple specifically promoted macOS as better for handling media and other home-user applications, and contrasted Mac OS X (especially versions Tiger and Leopard) with the heavy criticism Microsoft received for the long-awaited Windows Vista operating system.
See also
Dock (macOS)
Classic Mac OS (1984–2001)
Comparison of BSD operating systems
Comparison of operating systems
List of operating systems
List of Macintosh software
Macintosh operating systems
References
External links
macOS – official site
macOS Support – official support page
Malware
Malware (a portmanteau of "malicious software") is any software intentionally designed to cause disruption to a computer, server, client, or computer network, to leak private information, to gain unauthorized access to information or systems, to deprive users of access to information, or to otherwise interfere with the user's computer security and privacy. By contrast, software that causes harm due to some deficiency is typically described as a software bug. Malware poses serious problems to individuals and businesses. According to Symantec's 2018 Internet Security Threat Report (ISTR), the number of malware variants rose to 669,947,865 in 2017, twice as many as in 2016.
Many types of malware exist, including computer viruses, worms, Trojan horses, ransomware, spyware, adware, rogue software, wipers, and scareware. Defense strategies against malware differ according to the type of malware, but most can be thwarted by installing antivirus software and firewalls, applying regular patches to reduce zero-day attacks, securing networks from intrusion, keeping regular backups and isolating infected systems. Malware is now being designed to evade antivirus software detection algorithms.
History
The notion of a self-reproducing computer program can be traced back to initial theories about the operation of complex automata. John von Neumann showed that in theory a program could reproduce itself, a plausibility result in computability theory. Fred Cohen experimented with computer viruses, confirmed von Neumann's postulate, and investigated other properties of malware such as detectability and self-obfuscation using rudimentary encryption. His 1987 doctoral dissertation was on the subject of computer viruses. The use of cryptographic technology as part of a virus's payload, exploited for attack purposes, was investigated from the mid-1990s onwards, and includes early ransomware and evasion ideas.
Before Internet access became widespread, viruses spread on personal computers by infecting executable programs or boot sectors of floppy disks. By inserting a copy of itself into the machine code instructions in these programs or boot sectors, a virus causes itself to be run whenever the program is run or the disk is booted. Early computer viruses were written for the Apple II and Macintosh, but they became more widespread with the dominance of the IBM PC and MS-DOS system. The first IBM PC virus in the "wild" was a boot sector virus dubbed (c)Brain, created in 1986 by the Farooq Alvi brothers in Pakistan. Malware distributors would trick the user into booting or running from an infected device or medium. For example, a virus could make an infected computer add autorunnable code to any USB stick plugged into it. Anyone who then attached the stick to another computer set to autorun from USB would in turn become infected, and also pass on the infection in the same way.
Older email software would automatically open HTML email containing potentially malicious JavaScript code. Users may also execute disguised malicious email attachments. The 2018 Data Breach Investigations Report by Verizon, cited by CSO Online, states that emails are the primary method of malware delivery, accounting for 92% of malware delivery around the world.
The first worms, network-borne infectious programs, originated not on personal computers, but on multitasking Unix systems. The first well-known worm was the Internet Worm of 1988, which infected SunOS and VAX BSD systems. Unlike a virus, this worm did not insert itself into other programs. Instead, it exploited security holes (vulnerabilities) in network server programs and started itself running as a separate process. This same behavior is used by today's worms as well.
With the rise of the Microsoft Windows platform in the 1990s, and the flexible macros of its applications, it became possible to write infectious code in the macro language of Microsoft Word and similar programs. These macro viruses infect documents and templates rather than applications (executables), but rely on the fact that macros in a Word document are a form of executable code.
Many early infectious programs, including the Morris Worm, the first internet worm, were written as experiments or pranks. Today, malware is used by both black hat hackers and governments to steal personal, financial, or business information. Today, any device that plugs into a USB port – even lights, fans, speakers, toys, or peripherals such as a digital microscope – can be used to spread malware. Devices can be infected during manufacturing or supply if quality control is inadequate.
Purposes
Malware is sometimes used broadly against government or corporate websites to gather guarded information, or to disrupt their operation in general. However, malware can be used against individuals to gain information such as personal identification numbers or details, bank or credit card numbers, and passwords.
Since the rise of widespread broadband Internet access, malicious software has more frequently been designed for profit. Since 2003, the majority of widespread viruses and worms have been designed to take control of users' computers for illicit purposes. Infected "zombie computers" can be used to send email spam, to host contraband data such as child pornography, or to engage in distributed denial-of-service attacks as a form of extortion.
Programs designed to monitor users' web browsing, display unsolicited advertisements, or redirect affiliate marketing revenues are called spyware. Spyware programs do not spread like viruses; instead they are generally installed by exploiting security holes. They can also be hidden and packaged together with unrelated user-installed software. The Sony BMG rootkit was intended to prevent illicit copying, but it also reported on users' listening habits and unintentionally created extra security vulnerabilities.
Ransomware prevents a user from accessing their files until a ransom is paid. There are two variations of ransomware: crypto ransomware and locker ransomware. Locker ransomware simply locks down a computer system without encrypting its contents, whereas crypto ransomware locks down a system and encrypts its contents. For example, programs such as CryptoLocker encrypt files securely, and only decrypt them on payment of a substantial sum of money.
Some malware is used to generate money by click fraud, making it appear that the computer user has clicked an advertising link on a site, generating a payment from the advertiser. It was estimated in 2012 that about 60 to 70% of all active malware used some kind of click fraud, and 22% of all ad-clicks were fraudulent.
In addition to criminal money-making, malware can be used for sabotage, often for political motives. Stuxnet, for example, was designed to disrupt very specific industrial equipment. There have been politically motivated attacks which spread over and shut down large computer networks, including massive deletion of files and corruption of master boot records, described as "computer killing." Such attacks were made on Sony Pictures Entertainment (25 November 2014, using malware known as Shamoon or W32.Disttrack) and Saudi Aramco (August 2012).
Types
These categories are not mutually exclusive, so malware may use multiple techniques.
Trojan horse
A Trojan horse is a harmful program that misrepresents itself to masquerade as a regular, benign program or utility in order to persuade a victim to install it. A Trojan horse usually carries a hidden destructive function that is activated when the application is started. The term is derived from the Ancient Greek story of the Trojan horse used to invade the city of Troy by stealth.
Trojan horses are generally spread by some form of social engineering, for example where a user is duped into executing an email attachment disguised to appear innocuous (e.g., a routine form to be filled in), or by drive-by download. Although their payload can be anything, many modern forms act as a backdoor, contacting a controller ("phoning home") which can then gain unauthorized access to the affected computer, potentially installing additional software such as a keylogger to steal confidential information, or cryptomining software or adware to generate revenue for the operator of the trojan. While Trojan horses and backdoors are not easily detectable by themselves, computers may appear to run slower and emit more heat or fan noise due to heavy processor or network usage, as may occur when cryptomining software is installed. Cryptominers may limit resource usage or run only during idle times in an attempt to evade detection.
Unlike computer viruses and worms, Trojan horses generally do not attempt to inject themselves into other files or otherwise propagate themselves.
In spring 2017, Mac users were hit by a new version of the Proton Remote Access Trojan (RAT), designed to extract password data from various sources, such as browser auto-fill data, the macOS keychain, and password vaults.
Rootkits
Once malicious software is installed on a system, it is essential that it stays concealed, to avoid detection. Software packages known as rootkits allow this concealment, by modifying the host's operating system so that the malware is hidden from the user. Rootkits can prevent a harmful process from being visible in the system's list of processes, or keep its files from being read.
Some types of harmful software contain routines to evade identification and/or removal attempts, not merely to hide themselves. An early example of this behavior is recorded in the Jargon File tale of a pair of programs infesting a Xerox CP-V time sharing system:
Backdoors
A backdoor is a method of bypassing normal authentication procedures, usually over a connection to a network such as the Internet. Once a system has been compromised, one or more backdoors may be installed in order to allow access in the future, invisibly to the user.
The idea has often been suggested that computer manufacturers preinstall backdoors on their systems to provide technical support for customers, but this has never been reliably verified. It was reported in 2014 that US government agencies had been diverting computers purchased by those considered "targets" to secret workshops where software or hardware permitting remote access by the agency was installed, considered to be among the most productive operations to obtain access to networks around the world. Backdoors may be installed by Trojan horses, worms, implants, or other methods.
Infectious Malware
The best-known types of malware, viruses and worms, are known for the manner in which they spread, rather than any specific types of behavior and have been likened to biological viruses.
Worm
A worm is a stand-alone piece of malware that transmits itself over a network to infect other computers and can copy itself without infecting files. These definitions lead to the observation that a virus requires the user to run infected software or an infected operating system for the virus to spread, whereas a worm spreads itself.
Virus
A computer virus is software usually hidden within another seemingly innocuous program that can produce copies of itself and insert them into other programs or files, and that usually performs a harmful action (such as destroying data). An example of this is a portable executable (PE) infection, a technique, usually used to spread malware, that inserts extra data or executable code into PE files. A computer virus embeds itself in some other executable software (including the operating system itself) on the target system without the user's knowledge and consent; when that software is run, the virus is spread to other executable files.
Ransomware
Screen-locking ransomware
Lock-screens, or screen lockers, are a type of "cyber police" ransomware that blocks the screen of a Windows or Android device with a false accusation of harvesting illegal content, attempting to scare the victim into paying a fee.
Jisut and SLocker impact Android devices more than other lock-screens, with Jisut making up nearly 60 percent of all Android ransomware detections.
Encryption-based ransomware
Encryption-based ransomware, as the name suggests, is a type of ransomware that encrypts all files on an infected machine. These types of malware then display a pop-up informing the user that their files have been encrypted and that they must pay (usually in Bitcoin) to recover them. Some examples of encryption-based ransomware are CryptoLocker and WannaCry.
Grayware
Grayware (sometimes spelled greyware) is a term, coming into use around 2004, that applies to any unwanted application or file that can worsen the performance of computers and may cause security risks but is not typically considered malware. Grayware applications behave in an annoying or undesirable manner, yet are less serious or troublesome than malware. Grayware encompasses spyware, adware, fraudulent dialers, joke programs ("jokeware"), remote access tools and other unwanted programs that may harm the performance of computers or cause inconvenience. For example, at one point, Sony BMG compact discs silently installed a rootkit on purchasers' computers with the intention of preventing illicit copying.
Potentially Unwanted Program (PUP)
Potentially unwanted programs (PUPs) or potentially unwanted applications (PUAs) are applications that would be considered unwanted despite being downloaded often by the user, possibly after failing to read a download agreement. PUPs include spyware, adware, and fraudulent dialers. Many security products classify unauthorised key generators as grayware, although they frequently carry true malware in addition to their ostensible purpose. Malwarebytes lists several criteria for classifying a program as a PUP. Some types of adware (using stolen certificates) turn off anti-malware and virus protection; technical remedies are available.
Evasion
Since the beginning of 2015, a sizable portion of malware has been utilizing a combination of many techniques designed to avoid detection and analysis, listed here from the most common to the least common:
evasion of analysis and detection by fingerprinting the environment when executed.
confusing automated tools' detection methods. This allows malware to avoid detection by technologies such as signature-based antivirus software by changing the server used by the malware.
timing-based evasion. This is when malware runs at certain times or following certain actions taken by the user, so it executes during certain vulnerable periods, such as during the boot process, while remaining dormant the rest of the time.
obfuscating internal data so that automated tools do not detect the malware.
An increasingly common technique (2015) is adware that uses stolen certificates to disable anti-malware and virus protection; technical remedies are available to deal with the adware.
Nowadays, one of the most sophisticated and stealthy ways of evasion is to use information hiding techniques, namely stegomalware. A survey on stegomalware was published by Cabaj et al. in 2018.
Another type of evasion technique is fileless malware, also known as Advanced Volatile Threats (AVTs). Fileless malware does not require a file to operate. It runs within memory and utilizes existing system tools to carry out malicious acts. Because there are no files on the system, there are no executable files for antivirus and forensic tools to analyze, making such malware nearly impossible to detect. The only way to detect fileless malware is to catch it operating in real time. These types of attacks have become more frequent, with a 432% increase in 2017, and made up 35% of attacks in 2018. Such attacks are not easy to perform but are becoming more prevalent with the help of exploit-kits.
Risks
Vulnerable software
A vulnerability is a weakness, flaw or software bug in an application, a complete computer, an operating system, or a computer network that is exploited by malware to bypass defences or gain the privileges it requires to run. For example, TestDisk 6.4 or earlier contained a vulnerability that allowed attackers to inject code into Windows. Malware can exploit security defects (security bugs or vulnerabilities) in the operating system, in applications (such as browsers, e.g. older versions of Microsoft Internet Explorer supported by Windows XP), or in vulnerable versions of browser plugins such as Adobe Flash Player, Adobe Acrobat or Reader, or Java SE. For example, a common method is exploitation of a buffer overrun vulnerability, where software designed to store data in a specified region of memory does not prevent more data than the buffer can accommodate from being supplied. Malware may provide data that overflows the buffer, with malicious executable code or data after the end; when this payload is accessed it does what the attacker, not the legitimate software, determines.
Malware can exploit recently discovered vulnerabilities before developers have had time to release a suitable patch. Even when new patches addressing the vulnerability have been released, they may not necessarily be installed immediately, allowing malware to take advantage of systems lacking patches. Sometimes even applying patches or installing new versions does not automatically uninstall the old versions. Security advisories from plug-in providers announce security-related updates. Common vulnerabilities are assigned CVE IDs and listed in the US National Vulnerability Database. Secunia PSI is an example of software, free for personal use, that will check a PC for vulnerable out-of-date software, and attempt to update it. Other approaches involve using firewalls and intrusion prevention systems to monitor unusual traffic patterns on the local computer network.
Excessive privileges
Users and programs can be assigned more privileges than they require, and malware can take advantage of this. For example, of 940 Android apps sampled, one third of them asked for more privileges than they required. Apps targeting the Android platform can be a major source of malware infection but one solution is to use third party software to detect apps that have been assigned excessive privileges.
Some systems allow all users to modify their internal structures, and such users today would be considered over-privileged users. This was the standard operating procedure for early microcomputer and home computer systems, where there was no distinction between an administrator or root, and a regular user of the system. In some systems, non-administrator users are over-privileged by design, in the sense that they are allowed to modify internal structures of the system. In some environments, users are over-privileged because they have been inappropriately granted administrator or equivalent status. This can be because users tend to demand more privileges than they need, so often end up being assigned unnecessary privileges.
Some systems allow code executed by a user to access all rights of that user, which is known as over-privileged code. This was also standard operating procedure for early microcomputer and home computer systems. Malware, running as over-privileged code, can use this privilege to subvert the system. Almost all currently popular operating systems, and also many scripting applications allow code too many privileges, usually in the sense that when a user executes code, the system allows that code all rights of that user.
Weak passwords
A credential attack occurs when a user account with administrative privileges is cracked and that account is used to provide malware with appropriate privileges. Typically, the attack succeeds because the weakest form of account security is used, which is typically a short password that can be cracked using a dictionary or brute force attack. Using strong passwords and enabling two-factor authentication can reduce this risk. With the latter enabled, even if an attacker can crack the password, they cannot use the account without also having the token possessed by the legitimate user of that account.
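To give a sense of why password length and character variety matter, the following minimal Python sketch estimates the worst-case time to exhaust a password space by brute force. The guess rate is an assumed figure for an offline attack against a fast hash; real-world rates vary widely and the numbers are purely illustrative.

```python
# Illustrative only: rough brute-force search-space estimate for passwords.
GUESSES_PER_SECOND = 10_000_000_000  # assumed attacker throughput (offline attack)

def worst_case_seconds(alphabet_size: int, length: int) -> float:
    """Seconds needed to try every password of the given length and alphabet."""
    return alphabet_size ** length / GUESSES_PER_SECOND

# 8 lowercase letters versus 14 characters drawn from a 94-symbol keyboard set
print(f"{worst_case_seconds(26, 8):,.0f} seconds")    # roughly 21 seconds
print(f"{worst_case_seconds(94, 14):.1e} seconds")    # on the order of 4e17 seconds
```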
Use of the same operating system
Homogeneity can be a vulnerability. For example, when all computers in a network run the same operating system, a single worm that can exploit one of them can exploit them all. In particular, Microsoft Windows and Mac OS X have such a large share of the market that a vulnerability in either operating system could be used to subvert a large number of systems. It is estimated that approximately 83% of malware infections between January and March 2020 were spread via systems running Windows 10. This risk is mitigated by segmenting the networks into different subnetworks and setting up firewalls to block traffic between them.
Mitigation
Antivirus / Anti-malware software
Anti-malware (sometimes also called antivirus) programs block and remove some or all types of malware. For example, Microsoft Security Essentials (for Windows XP, Vista, and Windows 7) and Windows Defender (for Windows 8, 10 and 11) provide real-time protection. The Windows Malicious Software Removal Tool removes malicious software from the system. Additionally, several capable antivirus software programs are available for free download from the Internet (usually restricted to non-commercial use). Tests found some free programs to be competitive with commercial ones.
Typically, antivirus software can combat malware in the following ways:
Real-time protection: They can provide real-time protection against the installation of malware on a computer. This type of protection works the same way as antivirus protection in that the anti-malware software scans all incoming network data for malware and blocks any threats it comes across.
Removal: Anti-malware software programs can be used solely for detection and removal of malware that has already been installed onto a computer. This type of anti-malware software scans the contents of the Windows registry, operating system files, and installed programs on a computer and will provide a list of any threats found, allowing the user to choose which files to delete or keep, or to compare this list to a list of known malware components, removing files that match (a minimal sketch of such signature matching appears after this list).
Sandboxing: They can provide sandboxing of apps considered dangerous (such as web browsers, through which most malware is likely to be installed).
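As an illustration of the signature-matching approach mentioned above, the following minimal Python sketch hashes files in a directory and compares the digests against a set of known-malware signatures. The signature value and the scan directory are placeholders, not real data.

```python
# Minimal sketch of signature-based detection: hash each file and compare the
# digest against a set of known-malware signatures.
import hashlib
from pathlib import Path

KNOWN_BAD_SHA256 = {
    "0" * 64,   # placeholder digest, not a real malware signature
}

def scan(directory: str) -> list:
    """Return the paths of files whose SHA-256 digest matches a known signature."""
    flagged = []
    for path in Path(directory).rglob("*"):
        if path.is_file():
            digest = hashlib.sha256(path.read_bytes()).hexdigest()
            if digest in KNOWN_BAD_SHA256:
                flagged.append(path)
    return flagged

print(scan("./downloads"))   # hypothetical directory to scan
```

Real products supplement exact hashes with heuristics and behavioural analysis, since even trivial changes to a file defeat exact-match signatures.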
Real-time protection
A specific component of anti-malware software, commonly referred to as an on-access or real-time scanner, hooks deep into the operating system's core or kernel and functions in a manner similar to how certain malware itself would attempt to operate, though with the user's informed permission for protecting the system. Any time the operating system accesses a file, the on-access scanner checks whether the file is infected. Typically, when an infected file is found, execution is stopped and the file is quarantined to prevent further, potentially irreversible, damage. Most AVs allow users to override this behaviour. This can have a considerable performance impact on the operating system, though the degree of impact depends on how many pages the scanner creates in virtual memory.
Sandboxing
Because many malware components are installed as a result of browser exploits or user error, using security software (some of which are anti-malware, though many are not) to "sandbox" browsers (essentially isolate the browser from the computer and hence any malware induced change) can also be effective in helping to restrict any damage done.
Website security scans
Website vulnerability scans check the website, detect malware, may note outdated software, and may report known security issues, in order to reduce the risk of the site being compromised.
Network Segregation
Structuring a network as a set of smaller networks, and limiting the flow of traffic between them to that known to be legitimate, can hinder the ability of infectious malware to replicate itself across the wider network. Software Defined Networking provides techniques to implement such controls.
"Air gap" isolation or "parallel network"
As a last resort, computers can be protected from malware, and the risk of infected computers disseminating trusted information can be greatly reduced by imposing an "air gap" (i.e. completely disconnecting them from all other networks) and applying enhanced controls over the entry and exit of software and data from the outside world. However, malware can still cross the air gap in some situations, not least due to the need to introduce software into the air-gapped network and can damage the availability or integrity of assets thereon. Stuxnet is an example of malware that is introduced to the target environment via a USB drive, causing damage to processes supported on the environment without the need to exfiltrate data.
AirHopper, BitWhisper, GSMem and Fansmitter are four techniques introduced by researchers that can leak data from air-gapped computers using electromagnetic, thermal and acoustic emissions.
See also
Botnet
Browser hijacking
Comparison of antivirus software
Computer security
Cuckoo's egg (metaphor)
Cyber spying
Domain generation algorithm
Facebook malware
File binder
Identity theft
Industrial espionage
Linux malware
Malvertising
Phishing
Riskware
Security in Web apps
Social engineering (security)
Targeted threat
Technical support scam
Telemetry software
Typosquatting
Web server overload causes
Webattacker
Zombie (computer science)
References
External links
Further Reading: Research Papers and Documents about Malware on IDMARCH (Int. Digital Media Archive)
Advanced Malware Cleaning – a Microsoft video
Security breaches
Computer programming
Cybercrime
Microsoft Access

Microsoft Access is a database management system (DBMS) from Microsoft that combines the relational Access Database Engine (ACE) with a graphical user interface and software-development tools. It is a member of the Microsoft 365 suite of applications, included in the Professional and higher editions or sold separately.
Microsoft Access stores data in its own format based on the Access Database Engine (formerly Jet Database Engine). It can also import or link directly to data stored in other applications and databases.
Software developers, data architects and power users can use Microsoft Access to develop application software. Like other Microsoft Office applications, Access is supported by Visual Basic for Applications (VBA), an object-based programming language that can reference a variety of objects including the legacy DAO (Data Access Objects), ActiveX Data Objects, and many other ActiveX components. Visual objects used in forms and reports expose their methods and properties in the VBA programming environment, and VBA code modules may declare and call Windows operating system operations.
History
Prior to the introduction of Access, Borland (with Paradox and dBase) and Fox (with FoxPro) dominated the desktop database market. Microsoft Access was the first mass-market database program for Windows. With Microsoft's purchase of FoxPro in 1992 and the incorporation of Fox's Rushmore query optimization routines into Access, Microsoft Access quickly became the dominant database for Windows—effectively eliminating the competition which failed to transition from the MS-DOS world.
Project Omega
Microsoft's first attempt to sell a relational database product was during the mid 1980s, when Microsoft obtained the license to sell R:Base. In the late 1980s Microsoft developed its own solution codenamed Omega. It was confirmed in 1988 that a database product for Windows and OS/2 was in development. It was going to include the "EB" Embedded Basic language, which was going to be the language for writing macros in all Microsoft applications, but the unification of macro languages did not happen until the introduction of Visual Basic for Applications (VBA). Omega was also expected to provide a front end to the Microsoft SQL Server. The application was very resource-hungry, and there were reports that it was working slowly on the 386 processors that were available at the time. It was scheduled to be released in the 1st quarter of 1990, but in 1989 the development of the product was reset and it was rescheduled to be delivered no sooner than in January 1991. Parts of the project were later used for other Microsoft projects: Cirrus (codename for Access) and Thunder (codename for Visual Basic, where the Embedded Basic engine was used). After Access's premiere, the Omega project was demonstrated in 1992 to several journalists and included features that were not available in Access.
Project Cirrus
After the Omega project was scrapped, some of its developers were assigned to the Cirrus project (most were assigned to the team which created Visual Basic). Its goal was to create a competitor for applications like Paradox or dBase that would work on Windows. After Microsoft acquired FoxPro, there were rumors that the Microsoft project might get replaced with it, but the company decided to develop them in parallel. It was assumed that the project would make use of Extensible Storage Engine (Jet Blue) but, in the end, only support for Jet Database Engine (Jet Red) was provided. The project used some of the code from both the Omega project and a pre-release version of Visual Basic. In July 1992, betas of Cirrus shipped to developers and the name Access became the official name of the product. "Access" was originally used for an older terminal emulation program from Microsoft. Years after that program was abandoned, Microsoft decided to reuse the name for its database product.
Timeline
1992: Microsoft released Access version 1.0 on November 13, 1992, and an Access 1.1 release in May 1993 to improve compatibility with other Microsoft products and to include the Access Basic programming language.
1994: Microsoft specified the minimum hardware requirements for Access v2.0 as: Microsoft Windows v3.1 with 4 MB of RAM required, 6 MB RAM recommended; 8 MB of available hard disk space required, 14 MB hard disk space recommended. The product shipped on seven 1.44 MB diskettes. The manual shows a 1994 copyright date.
As part of Microsoft Office 4.3 Professional with Bookshelf, Microsoft Access 2.0 was included with the first sample database, "Northwind Traders", which covered nearly every aspect of programming your own database. The Northwind Traders sample also introduced the Main Switchboard features new to Access 2.0 in 1994.
The photo of Andrew Fuller, record #2 of that sample database, depicts the individual who presented and worked with Microsoft to provide such an outstanding example database.
With Office 95, Microsoft Access 7.0 (a.k.a. "Access 95") became part of the Microsoft Office Professional Suite, joining Microsoft Excel, Word, and PowerPoint and transitioning from Access Basic to VBA. Since then, Microsoft has released new versions of Microsoft Access with each release of Microsoft Office. This includes Access 97 (version 8.0), Access 2000 (version 9.0), Access 2002 (version 10.0), Access 2003 (version 11.5), Access 2007 (version 12.0), Access 2010 (version 14.0), and Access 2013 (version 15.0).
Versions 3.0 and 3.5 of Jet Database Engine (used by Access 7.0 and the later-released Access 97 respectively) had a critical issue which made these versions of Access unusable on a computer with more than 1 GB of memory. While Microsoft fixed this problem for Jet 3.5/Access 97 post-release, it never fixed the issue with Jet 3.0/Access 95.
The native Access database format (the Jet MDB Database) has also evolved over the years. Formats include Access 1.0, 1.1, 2.0, 7.0, 97, 2000, 2002, and 2007. The most significant transition was from the Access 97 to the Access 2000 format, which is not backward compatible with earlier versions of Access. All newer versions of Access support the Access 2000 format. New features were added to the Access 2002 format which can be used by Access 2002, 2003, 2007, and 2010.
Microsoft Access 2000 increased the maximum database size to 2 GB from 1 GB in Access 97.
Microsoft Access 2007 introduced a new database format: ACCDB. It supports links to SharePoint lists and complex data types such as multivalue and attachment fields. These new field types are essentially recordsets in fields and allow the storage of multiple values or files in one field. Microsoft Access 2007 also introduced File Attachment field, which stored data more efficiently than the OLE (Object Linking and Embedding) field.
Microsoft Access 2010 introduced a new version of the ACCDB format that supported hosting Access Web services on a SharePoint 2010 server. For the first time, this allowed Access applications to be run by users without Access installed on their PC, and was the first version to support Mac users. Any user on the SharePoint site with sufficient rights could use the Access Web service. A copy of Access was still required for the developer to create the Access Web service, and the desktop version of Access remained part of Access 2010. The Access Web services were not the same as the desktop applications. Automation was only through the macro language (not VBA), which Access automatically converted to JavaScript. The data was no longer in an Access database but in SharePoint lists. An Access desktop database could link to the SharePoint data, so hybrid applications were possible: SharePoint users needing basic views and edits could be supported while the more sophisticated, traditional applications could remain in the desktop Access database.
Microsoft Access 2013 offers traditional Access desktop applications plus a significantly updated SharePoint 2013 web service. The Access Web model in Access 2010 was replaced by a new architecture that stores its data in actual SQL Server databases. Unlike SharePoint lists, this offers true relational database design with referential integrity, scalability, extensibility and performance one would expect from SQL Server. The database solutions that can be created on SharePoint 2013 offer a modern user interface designed to display multiple levels of relationships that can be viewed and edited, along with resizing for different devices and support for touch. The Access 2013 desktop is similar to Access 2010 but several features were discontinued including support for Access Data Projects (ADPs), pivot tables, pivot charts, Access data collections, source code control, replication, and other legacy features. Access desktop database maximum size remained 2 GB (as it has been since the 2000 version).
Uses
In addition to using its own database storage file, Microsoft Access also may be used as the 'front-end' of a program while other products act as the 'back-end' tables, such as Microsoft SQL Server and non-Microsoft products such as Oracle and Sybase. Multiple backend sources can be used by a Microsoft Access Jet Database (ACCDB and MDB formats). Similarly, some applications such as Visual Basic, ASP.NET, or Visual Studio .NET will use the Microsoft Access database format for their tables and queries. Microsoft Access may also be part of a more complex solution, where it may be integrated with other technologies such as Microsoft Excel, Microsoft Outlook, Microsoft Word, Microsoft PowerPoint and ActiveX controls.
Access tables support a variety of standard field types, indices, and referential integrity including cascading updates and deletes. Access also includes a query interface, forms to display and enter data, and reports for printing. The underlying Access database, which contains these objects, is multi-user and handles record-locking.
Repetitive tasks can be automated through macros with point-and-click options. It is also easy to place a database on a network and have multiple users share and update data without overwriting each other's work. Data is locked at the record level which is significantly different from Excel which locks the entire spreadsheet.
There are template databases within the program and for download from Microsoft's website. These options are available upon starting Access and allow users to enhance a database with predefined tables, queries, forms, reports, and macros. Database templates support VBA code but Microsoft's templates do not include VBA code.
Programmers can create solutions using VBA, which is similar to Visual Basic 6.0 (VB6) and used throughout the Microsoft Office programs such as Excel, Word, Outlook and PowerPoint. Most VB6 code, including the use of Windows API calls, can be used in VBA. Power users and developers can extend basic end-user solutions to a professional solution with advanced automation, data validation, error trapping, and multi-user support.
The number of simultaneous users that can be supported depends on the amount of data, the tasks being performed, level of use, and application design. Generally accepted limits are solutions with 1 GB or less of data (Access supports up to 2 GB) and it performs quite well with 100 or fewer simultaneous connections (255 concurrent users are supported). This capability is often a good fit for department solutions. If using an Access database solution in a multi-user scenario, the application should be "split". This means that the tables are in one file called the back end (typically stored on a shared network folder) and the application components (forms, reports, queries, code, macros, linked tables) are in another file called the front end. The linked tables in the front end point to the back end file. Each user of the Access application would then receive his or her own copy of the front end file.
Applications that run complex queries or analysis across large datasets would naturally require greater bandwidth and memory. Microsoft Access is designed to scale to support more data and users by linking to multiple Access databases or using a back-end database like Microsoft SQL Server. With the latter design, the amount of data and users can scale to enterprise-level solutions.
Microsoft Access's role in web development prior to version 2010 is limited. User interface features of Access, such as forms and reports, only work in Windows. In versions 2000 through 2003 an Access object type called Data Access Pages created publishable web pages. Data Access Pages are no longer supported. The Jet Database Engine, core to Access, can be accessed through technologies such as ODBC or OLE DB. The data (i.e., tables and queries) can be accessed by web-based applications developed in ASP.NET, PHP, or Java. With the use of Microsoft's Terminal Services and Remote Desktop Application in Windows Server 2008 R2, organizations can host Access applications so they can be run over the web. This technique does not scale the way a web application would but is appropriate for a limited number of users depending on the configuration of the host.
Access 2010 allows databases to be published to SharePoint 2010 web sites running Access Services. These web-based forms and reports run in any modern web browser. The resulting web forms and reports, when accessed via a web browser, don't require any add-ins or extensions (e.g. ActiveX, Silverlight).
Access 2013 can create web applications directly in SharePoint 2013 sites running Access Services. Access 2013 web solutions store its data in an underlying SQL Server database which is much more scalable and robust than the Access 2010 version which used SharePoint lists to store its data.
Access Services in SharePoint has since been retired.
A compiled version of an Access database (File extensions: .MDE /ACCDE or .ADE; ACCDE only works with Access 2007 or later) can be created to prevent users from accessing the design surfaces to modify module code, forms, and reports. An MDE or ADE file is a Microsoft Access database file with all modules compiled and all editable source code removed. Both the .MDE and .ADE versions of an Access database are used when end-user modifications are not allowed or when the application's source code should be kept confidential.
Microsoft also offers developer extensions for download to help distribute Access 2007 applications, create database templates, and integrate source code control with Microsoft Visual SourceSafe.
Features
Users can create tables, queries, forms and reports, and connect them together with macros. Advanced users can use VBA to write rich solutions with advanced data manipulation and user control. Access also has report creation features that can work with any data source that Access can access.
The original concept of Access was for end users to be able to access data from any source. Other features include: the import and export of data to many formats including Excel, Outlook, ASCII, dBase, Paradox, FoxPro, SQL Server and Oracle. It also has the ability to link to data in its existing location and use it for viewing, querying, editing, and reporting. This allows the existing data to change while ensuring that Access uses the latest data. It can perform heterogeneous joins between data sets stored across different platforms. Access is often used by people downloading data from enterprise level databases for manipulation, analysis, and reporting locally.
There is also the Access Database (ACE and formerly Jet) format (MDB or ACCDB in Access 2007) which can contain the application and data in one file. This makes it very convenient to distribute the entire application to another user, who can run it in disconnected environments.
One of the benefits of Access from a programmer's perspective is its relative compatibility with SQL (structured query language)—queries can be viewed graphically or edited as SQL statements, and SQL statements can be used directly in Macros and VBA Modules to manipulate Access tables. Users can mix and use both VBA and "Macros" for programming forms and logic and offers object-oriented possibilities. VBA can also be included in queries.
Microsoft Access offers parameterized queries. These queries and Access tables can be referenced from other programs like VB6 and .NET through DAO or ADO. From Microsoft Access, VBA can reference parameterized stored procedures via ADO.
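For illustration, a minimal Python sketch of the kind of external, parameterized access described above, going through ODBC rather than DAO or ADO. It assumes the pyodbc package and the Microsoft Access ODBC driver are installed; the database path, table and column names are hypothetical.

```python
# Sketch: a parameterized query against an Access database over ODBC.
# Assumes pyodbc and the "Microsoft Access Driver (*.mdb, *.accdb)" are installed;
# the file path, table and column names below are hypothetical.
import pyodbc

conn_str = (
    r"DRIVER={Microsoft Access Driver (*.mdb, *.accdb)};"
    r"DBQ=C:\data\northwind.accdb;"   # hypothetical database file
)
with pyodbc.connect(conn_str) as conn:
    cursor = conn.cursor()
    # The ? placeholder keeps the user-supplied value out of the SQL text itself.
    cursor.execute("SELECT CompanyName FROM Customers WHERE Country = ?", ("Germany",))
    for (name,) in cursor.fetchall():
        print(name)
```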
The desktop editions of Microsoft SQL Server can be used with Access as an alternative to the Jet Database Engine. This support started with MSDE (Microsoft SQL Server Desktop Engine), a scaled down version of Microsoft SQL Server 2000, and continues with the SQL Server Express versions of SQL Server 2005 and 2008.
Microsoft Access is a file server-based database. Unlike client–server relational database management systems (RDBMS), Microsoft Access does not implement database triggers, stored procedures, or transaction logging. Access 2010 includes table-level triggers and stored procedures built into the ACE data engine. Thus a Client-server database system is not a requirement for using stored procedures or table triggers with Access 2010.
Tables, queries, forms, reports and macros can now be developed specifically for web based applications in Access 2010. Integration with Microsoft SharePoint 2010 is also highly improved.
The 2013 edition of Microsoft Access introduced a mostly flat design and the ability to install apps from the Office Store, but it did not introduce new features. The theme was partially updated again for 2016, but no dark theme was created for Access.
Access Services and Web database
ASP.NET web forms can query a Microsoft Access database, retrieve records and display them on the browser.
SharePoint Server 2010 via Access Services allows for Access 2010 databases to be published to SharePoint, thus enabling multiple users to interact with the database application from any standards-compliant Web browser. Access Web databases published to SharePoint Server can use standard objects such as tables, queries, forms, macros, and reports. Access Services stores those objects in SharePoint.
Access 2013 offers the ability to publish Access web solutions on SharePoint 2013. Rather than using SharePoint lists as its data source, Access 2013 uses an actual SQL Server database hosted by SharePoint or SQL Azure. This offers a true relational database with referential integrity, scalability, maintainability, and extensibility compared to the SharePoint views Access 2010 used. The macro language is enhanced to support more sophisticated programming logic and database level automation.
Import or link sources
Microsoft Access can also import or link directly to data stored in other applications and databases. Microsoft Office Access 2007 and newer can import from or link to:
Microsoft Access
Excel
SharePoint lists
Plain text
XML
Outlook
HTML
dBase (dropped in Access 2013; restored in Access 2016)
Paradox (with Access 2007; dropped in Access 2010)
Lotus 1-2-3 (dropped in Access 2010)
ODBC-compliant data containers, including:
Microsoft SQL Server
Oracle
MySQL
PostgreSQL
IBM Lotus Notes
IBM i DB2
Microsoft Access Runtime
Microsoft offers free runtime versions of Microsoft Access which allow users to run an Access desktop application without needing to purchase or install a retail version of Microsoft Access. This actually allows Access developers to create databases that can be freely distributed to an unlimited number of end-users. These runtime versions of Access 2007 and later can be downloaded for free from Microsoft. The runtime versions for Access 2003 and earlier were part of the Office Developer Extensions/Toolkit and required a separate purchase.
The runtime version allows users to view, edit and delete data, along with running queries, forms, reports, macros and VBA module code. The runtime version does not allow users to change the design of Microsoft Access tables, queries, forms, reports, macros or module code. The runtime versions are similar to their corresponding full version of Access and usually compatible with earlier versions; for example Access Runtime 2010 allows a user to run an Access application made with the 2010 version as well as 2007 through 2000. Due to deprecated features in Access 2013, its runtime version is also unable to support those older features. During development one can simulate the runtime environment from the fully functional version by using the /runtime command line option.
Development
Access stores all database tables, queries, forms, reports, macros, and modules in the Access Jet database as a single file.
For query development, Access offers a "Query Designer", a graphical user interface that allows users to build queries without knowledge of structured query language. In the Query Designer, users can "show" the datasources of the query (which can be tables or queries) and select the fields they want returned by clicking and dragging them into the grid. One can set up joins by clicking and dragging fields in tables to fields in other tables. Access allows users to view and manipulate the SQL code if desired. Any Access table, including linked tables from different data sources, can be used in a query.
Access also supports the creation of "pass-through queries". These snippets of SQL code can address external data sources through the use of ODBC connections on the local machine. This enables users to interact with data stored outside the Access program without using linked tables or Jet.
Users construct the pass-through queries using the SQL syntax supported by the external data source.
When developing reports (in "Design View") additions or changes to controls cause any linked queries to execute in the background and the designer is forced to wait for records to be returned before being able to make another change. This feature cannot be turned off.
Non-programmers can use the macro feature to automate simple tasks through a series of drop-down selections. Macros allow users to easily chain commands together such as running queries, importing or exporting data, opening and closing forms, previewing and printing reports, etc. Macros support basic logic (IF-conditions) and the ability to call other macros. Macros can also contain sub-macros which are similar to subroutines. In Access 2007, enhanced macros included error-handling and support for temporary variables. Access 2007 also introduced embedded macros that are essentially properties of an object's event. This eliminated the need to store macros as individual objects. However, macros were limited in their functionality by a lack of programming loops and advanced coding logic until Access 2013. With significant further enhancements introduced in Access 2013, the capabilities of macros became fully comparable to VBA. They made feature rich web-based application deployments practical, via a greatly enhanced Microsoft SharePoint interface and tools, as well as on traditional Windows desktops.
In common with other products in the Microsoft Office suite, the other programming language used in Access is Microsoft VBA. It is similar to Visual Basic 6.0 (VB6) and code can be stored in modules, classes, and code behind forms and reports. To create a richer, more efficient and maintainable finished product with good error handling, most professional Access applications are developed using the VBA programming language rather than macros, except where web deployment is a business requirement.
To manipulate data in tables and queries in VBA or macros, Microsoft provides two database access libraries of COM components:
Data Access Objects (DAO) (32-bit only), which is included in Access and Windows and evolved to ACE in Microsoft Access 2007 for the ACCDE database format
ActiveX Data Objects (ADO) (both 32-bit and 64-bit versions)
As well as DAO and ADO, developers can also use OLE DB and ODBC for developing native C/C++ programs for Access. For ADPs and the direct manipulation of SQL Server data, ADO is required. DAO is most appropriate for managing data in Access/Jet databases, and the only way to manipulate the complex field types in ACCDB tables.
In the database container or navigation pane in Access 2007 and later versions, the system automatically categorizes each object by type (e.g., table, query, macro). Many Access developers use the Leszynski naming convention, though this is not universal; it is a programming convention, not a DBMS-enforced rule. It is particularly helpful in VBA, where references to object names may not indicate their data type (e.g. tbl for tables, qry for queries).
Developers deploy Microsoft Access most often for individual and workgroup projects (the Access 97 speed characterization was done for 32 users). Since Access 97, and with Access 2003 and 2007, Microsoft Access and hardware have evolved significantly. Databases under 1 GB in size (which can now fit entirely in RAM) and 200 simultaneous users are well within the capabilities of Microsoft Access. Of course, performance depends on the database design and tasks. Disk-intensive work such as complex searching and querying take the most time.
As data from a Microsoft Access database can be cached in RAM, processing speed may substantially improve when there is only a single user or if the data is not changing. In the past, the effect of packet latency on the record-locking system caused Access databases to run slowly on a virtual private network (VPN) or a wide area network (WAN) against a Jet database. Broadband connections have mitigated this issue. Performance can also be enhanced if a continuous connection is maintained to the back-end database throughout the session rather than opening and closing it for each table access.
In July 2011, Microsoft acknowledged an intermittent query performance problem with all versions of Access and Windows 7 and Windows Server 2008 R2 due to the nature of resource management being vastly different in newer operating systems. This issue severely affects query performance on both Access 2003 and earlier with the Jet Database Engine code, as well as Access 2007 and later with the Access Database Engine (ACE). Microsoft has issued hotfixes KB2553029 for Access 2007 and KB2553116 for Access 2010, but will not fix the issue with Jet 4.0 as it is out of mainstream support.
In earlier versions of Microsoft Access, the ability to distribute applications required the purchase of the Developer Toolkit; in Access 2007, 2010 and Access 2013 the "Runtime Only" version is offered as a free download, making the distribution of royalty-free applications possible on Windows XP, Vista, 7 and Windows 8.x.
Split database architecture
Microsoft Access applications can adopt a split-database architecture. The single database can be divided into a separate "back-end" file that contains the data tables (shared on a file server) and a "front-end" (containing the application's objects such as queries, forms, reports, macros, and modules). The "front-end" Access application is distributed to each user's desktop and linked to the shared database. Using this approach, each user has a copy of Microsoft Access (or the runtime version) installed on their machine along with their application database. This reduces network traffic since the application is not retrieved for each use. The "front-end" database can still contain local tables for storing a user's settings or temporary data. This split-database design also allows development of the application independent of the data. One disadvantage is that users may make various changes to their own local copy of the application and this makes it hard to manage version control. When a new version is ready, the front-end database is replaced without impacting the data database. Microsoft Access has two built-in utilities, Database Splitter and Linked Table Manager, to facilitate this architecture.
Linked tables in Access use absolute paths rather than relative paths, so the development environment either has to have the same path as the production environment or a "dynamic-linker" routine can be written in VBA.
For very large Access databases, this may have performance issues and a SQL backend should be considered in these circumstances. This is less of an issue if the entire database can fit in the PC's RAM since Access caches data and indexes.
Migration to SQL Server
To scale Access applications to enterprise or web solutions, one possible technique involves migrating to Microsoft SQL Server or equivalent server database. A client–server design significantly reduces maintenance and increases security, availability, stability, and transaction logging.
Access 2000 through Access 2010 included a feature called the Upsizing Wizard that allowed users to upgrade their databases to Microsoft SQL Server, an ODBC client–server database. This feature was removed from Access 2013. An additional solution, the SQL Server Migration Assistant for Access (SSMA), continues to be available for free download from Microsoft.
A variety of upgrading options are available. After migrating the data and queries to SQL Server, the Access database can be linked to the SQL database. However, certain data types are problematic, most notably "Yes/No". In Microsoft Access there are three states for the Yes/No (True/False) data type: empty, no/false (zero) and yes/true (−1). The corresponding SQL Server data type is binary, with only two permissible values, zero and 1. Regardless, SQL Server is still the easiest migration. Retrieving data from linked tables is optimized to just the records needed, but this scenario may operate less efficiently than what would otherwise be optimal for SQL Server, for example in instances where multi-table joins still require copying the whole table across the network.
In previous versions of Access, including Access 2010, databases can also be converted to Access Data Projects (ADP) which are tied directly to one SQL Server database. This feature was removed from Access 2013. ADP's support the ability to directly create and modify SQL Server objects such as tables, views, stored procedures, and SQL Server constraints. The views and stored procedures can significantly reduce the network traffic for multi-table joins. SQL Server supports temporary tables and links to other data sources beyond the single SQL Server database.
Finally, some Access databases are completely replaced by another technology such as ASP.NET or Java once the data is converted. However any migration may dictate major effort since the Access SQL language is a more powerful superset of standard SQL. Further, Access application procedures, whether VBA and macros, are written at a relatively higher level versus the currently available alternatives that are both robust and comprehensive. Note that the Access macro language, allowing an even higher level of abstraction than VBA, was significantly enhanced in Access 2010 and again in Access 2013.
In many cases, developers build direct web-to-data interfaces using ASP.NET, while keeping major business automation processes, administrative and reporting functions that don't need to be distributed to everyone in Access for information workers to maintain.
While all Access data can migrate to SQL Server directly, some queries cannot migrate successfully. In some situations, you may need to translate VBA functions and user defined functions into T–SQL or .NET functions / procedures. Crosstab queries can be migrated to SQL Server using the PIVOT command.
Protection
Microsoft Access applications can be made secure by various methods, the most basic being password access control; this is a relatively weak form of protection.
A higher level of protection is the use of workgroup security requiring a user name and password. Users and groups can be specified along with their rights at the object type or individual object level. This can be used to specify people with read-only or data entry rights but may be challenging to specify. A separate workgroup security file contains the settings which can be used to manage multiple databases. Workgroup security is not supported in the Access 2007 and Access 2010 ACCDB database format, although Access 2007 and Access 2010 still support it for MDB databases.
Databases can also be encrypted. The ACCDB format offers significantly advanced encryption from previous versions.
Additionally, if the database design needs to be secured to prevent changes, Access databases can be locked/protected (and the source code compiled) by converting the database to a .MDE file. All changes to the VBA project (modules, forms, or reports) need to be made to the original MDB and then reconverted to MDE. In Access 2007 and Access 2010, the ACCDB database is converted to an ACCDE file. Some tools are available for unlocking and "decompiling", although certain elements including original VBA comments and formatting are normally irretrievable.
File extensions
Microsoft Access saves information under the following file formats:
Versions
There are no Access versions between 2.0 and 7.0 because the Office 95 version was launched alongside Word 7. All of the Office 95 products had OLE 2 capabilities, and the version number 7 shows that Access was released in step with Word 7.
Version number 13 was skipped.
See also
Comparison of relational database management systems
Form (web)
MDB Tools
Kexi
LibreOffice Base
References
External links
Access Blog
Microsoft Access Version Releases, Service Packs, Hotfixes, and Updates History
1992 software
Data-centric programming languages
Desktop database application development tools
MacOS database-related software
Microsoft database software
Access
Programming languages created in 1992
Proprietary database management systems
Relational database management systems
Windows database-related software
Database administration tools
Number theory

Number theory (or arithmetic or higher arithmetic in older usage) is a branch of pure mathematics devoted primarily to the study of the integers and integer-valued functions. German mathematician Carl Friedrich Gauss (1777–1855) said, "Mathematics is the queen of the sciences—and number theory is the queen of mathematics." Number theorists study prime numbers as well as the properties of mathematical objects made out of integers (for example, rational numbers) or defined as generalizations of the integers (for example, algebraic integers).
Integers can be considered either in themselves or as solutions to equations (Diophantine geometry). Questions in number theory are often best understood through the study of analytical objects (for example, the Riemann zeta function) that encode properties of the integers, primes or other number-theoretic objects in some fashion (analytic number theory). One may also study real numbers in relation to rational numbers, for example, as approximated by the latter (Diophantine approximation).
The older term for number theory is arithmetic. By the early twentieth century, it had been superseded by "number theory". (The word "arithmetic" is used by the general public to mean "elementary calculations"; it has also acquired other meanings in mathematical logic, as in Peano arithmetic, and computer science, as in floating-point arithmetic.) The use of the term arithmetic for number theory regained some ground in the second half of the 20th century, arguably in part due to French influence. In particular, arithmetical is commonly preferred as an adjective to number-theoretic.
History
Origins
Dawn of arithmetic
The earliest historical find of an arithmetical nature is a fragment of a table: the broken clay tablet Plimpton 322 (Larsa, Mesopotamia, ca. 1800 BC) contains a list of "Pythagorean triples", that is, integers a, b, c such that a² + b² = c².
The triples are too many and too large to have been obtained by brute force. The heading over the first column reads: "The takiltum of the diagonal which has been subtracted such that the width..."
The table's layout suggests that it was constructed by means of what amounts, in modern language, to the identity

((x − 1/x)/2)² + 1 = ((x + 1/x)/2)²,

which is implicit in routine Old Babylonian exercises. If some other method was used, the triples were first constructed and then reordered by c/a, presumably for actual use as a "table", for example, with a view to applications.
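In modern notation, setting x = p/q in the identity above and clearing denominators gives the parametrization (p² − q²)² + (2pq)² = (p² + q²)². A minimal Python sketch of this reconstruction (purely illustrative; the Babylonians, of course, worked differently):

```python
# Generate primitive Pythagorean triples from the parametrization
# (p^2 - q^2, 2pq, p^2 + q^2), which follows from the identity by taking x = p/q.
from math import gcd

def primitive_triples(limit: int):
    for p in range(2, limit + 1):
        for q in range(1, p):
            if (p - q) % 2 == 1 and gcd(p, q) == 1:   # conditions for a primitive triple
                yield p * p - q * q, 2 * p * q, p * p + q * q

for a, b, c in primitive_triples(5):
    assert a * a + b * b == c * c
    print(a, b, c)       # (3, 4, 5), (5, 12, 13), (15, 8, 17), ...
```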
It is not known what these applications may have been, or whether there could have been any; Babylonian astronomy, for example, truly came into its own only later. It has been suggested instead that the table was a source of numerical examples for school problems.
While Babylonian number theory—or what survives of Babylonian mathematics that can be called thus—consists of this single, striking fragment, Babylonian algebra (in the secondary-school sense of "algebra") was exceptionally well developed. Late Neoplatonic sources state that Pythagoras learned mathematics from the Babylonians. Much earlier sources state that Thales and Pythagoras traveled and studied in Egypt.
Euclid IX 21–34 is very probably Pythagorean; it is very simple material ("odd times even is even", "if an odd number measures [= divides] an even number, then it also measures [= divides] half of it"), but it is all that is needed to prove that √2 is irrational. Pythagorean mystics gave great importance to the odd and the even.
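The standard parity argument, sketched below in modern notation, shows how little is needed beyond facts about odd and even numbers.

```latex
% Sketch of the parity argument. Suppose \sqrt{2} = p/q with p/q in lowest terms. Then
p^2 = 2q^2,
% so p^2 is even and hence p is even, say p = 2r. Substituting gives
4r^2 = 2q^2 \quad\Longrightarrow\quad q^2 = 2r^2,
% so q is even as well, contradicting the assumption that p/q was in lowest terms.
```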
The discovery that √2 is irrational is credited to the early Pythagoreans (pre-Theodorus). By revealing (in modern terms) that numbers could be irrational, this discovery seems to have provoked the first foundational crisis in mathematical history; its proof or its divulgation are sometimes credited to Hippasus, who was expelled or split from the Pythagorean sect. This forced a distinction between numbers (integers and the rationals—the subjects of arithmetic), on the one hand, and lengths and proportions (which we would identify with real numbers, whether rational or not), on the other hand.
The Pythagorean tradition spoke also of so-called polygonal or figurate numbers. While square numbers, cubic numbers, etc., are seen now as more natural than triangular numbers, pentagonal numbers, etc., the study of the sums of triangular and pentagonal numbers would prove fruitful in the early modern period (17th to early 19th century).
We know of no clearly arithmetical material in ancient Egyptian or Vedic sources, though there is some algebra in each. The Chinese remainder theorem appears as an exercise in Sunzi Suanjing (3rd, 4th or 5th century CE). (There is one important step glossed over in Sunzi's solution: it is the problem that was later solved by Āryabhaṭa's Kuṭṭaka – see below.)
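The exercise in the Sunzi Suanjing asks for a number leaving remainder 2 on division by 3, remainder 3 on division by 5 and remainder 2 on division by 7. A minimal Python sketch, solving it by direct search over one full cycle of the moduli (a modern illustration, not Sunzi's procedure):

```python
# Smallest non-negative x with x = r_i (mod m_i) for each pair, found by direct search.
from math import prod

def crt_search(remainders, moduli):
    for x in range(prod(moduli)):
        if all(x % m == r for r, m in zip(remainders, moduli)):
            return x
    return None

print(crt_search([2, 3, 2], [3, 5, 7]))   # prints 23, the answer given in the Sunzi Suanjing
```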
There is also some numerical mysticism in Chinese mathematics, but, unlike that of the Pythagoreans, it seems to have led nowhere. Like the Pythagoreans' perfect numbers, magic squares have passed from superstition into recreation.
Classical Greece and the early Hellenistic period
Aside from a few fragments, the mathematics of Classical Greece is known to us either through the reports of contemporary non-mathematicians or through mathematical works from the early Hellenistic period. In the case of number theory, this means, by and large, Plato and Euclid, respectively.
While Asian mathematics influenced Greek and Hellenistic learning, it seems to be the case that Greek mathematics is also an indigenous tradition.
Eusebius, PE X, chapter 4 mentions of Pythagoras:
"In fact the said Pythagoras, while busily studying the wisdom of each nation, visited Babylon, and Egypt, and all Persia, being instructed by the Magi and the priests: and in addition to these he is related to have studied under the Brahmans (these are Indian philosophers); and from some he gathered astrology, from others geometry, and arithmetic and music from others, and different things from different nations, and only from the wise men of Greece did he get nothing, wedded as they were to a poverty and dearth of wisdom: so on the contrary he himself became the author of instruction to the Greeks in the learning which he had procured from abroad."
Aristotle claimed that the philosophy of Plato closely followed the teachings of the Pythagoreans, and Cicero repeats this claim: Platonem ferunt didicisse Pythagorea omnia ("They say Plato learned all things Pythagorean").
Plato had a keen interest in mathematics, and distinguished clearly between arithmetic and calculation. (By arithmetic he meant, in part, theorising on number, rather than what arithmetic or number theory have come to mean.) It is through one of Plato's dialogues—namely, Theaetetus—that we know that Theodorus had proven that √3, √5, ..., √17 are irrational. Theaetetus was, like Plato, a disciple of Theodorus's; he worked on distinguishing different kinds of incommensurables, and was thus arguably a pioneer in the study of number systems. (Book X of Euclid's Elements is described by Pappus as being largely based on Theaetetus's work.)
Euclid devoted part of his Elements to prime numbers and divisibility, topics that belong unambiguously to number theory and are basic to it (Books VII to IX of Euclid's Elements). In particular, he gave an algorithm for computing the greatest common divisor of two numbers (the Euclidean algorithm; Elements, Prop. VII.2) and the first known proof of the infinitude of primes (Elements, Prop. IX.20).
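The Euclidean algorithm of Elements, Prop. VII.2 survives essentially unchanged; a minimal Python sketch of its modern iterative form:

```python
# Euclidean algorithm: repeatedly replace (a, b) by (b, a mod b) until the remainder is 0.
def euclid_gcd(a: int, b: int) -> int:
    while b:
        a, b = b, a % b
    return a

print(euclid_gcd(252, 105))   # 21
```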
In 1773, Lessing published an epigram he had found in a manuscript during his work as a librarian; it claimed to be a letter sent by Archimedes to Eratosthenes. The epigram proposed what has become known as
Archimedes's cattle problem; its solution (absent from the manuscript) requires solving an indeterminate quadratic equation (which reduces to what would later be misnamed Pell's equation). As far as we know, such equations were first successfully treated by the Indian school. It is not known whether Archimedes himself had a method of solution.
Diophantus
Very little is known about Diophantus of Alexandria; he probably lived in the third century AD, that is, about five hundred years after Euclid. Six out of the thirteen books of Diophantus's Arithmetica survive in the original Greek and four more survive in an Arabic translation. The Arithmetica is a collection of worked-out problems where the task is invariably to find rational solutions to a system of polynomial equations, usually of the form f(x, y) = z² or f(x, y, z) = w². Thus, nowadays, we speak of Diophantine equations when we speak of polynomial equations to which rational or integer solutions must be found.
One may say that Diophantus was studying rational points, that is, points whose coordinates are rational—on curves and algebraic varieties; however, unlike the Greeks of the Classical period, who did what we would now call basic algebra in geometrical terms, Diophantus did what we would now call basic algebraic geometry in purely algebraic terms. In modern language, what Diophantus did was to find rational parametrizations of varieties; that is, given an equation of the form (say) f(x₁, x₂, x₃) = 0, his aim was to find (in essence) three rational functions g₁, g₂, g₃ such that, for all values of r and s, setting xᵢ = gᵢ(r, s) for i = 1, 2, 3 gives a solution to f(x₁, x₂, x₃) = 0.
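A simple modern example of such a rational parametrization (an illustration in today's notation, not one of Diophantus's own problems): every rational point on the circle x² + y² = 1 other than (−1, 0) can be written as ((1 − t²)/(1 + t²), 2t/(1 + t²)) for some rational t. A short Python check:

```python
# Verify the rational parametrization of the unit circle for a few rational t.
from fractions import Fraction

def circle_point(t: Fraction):
    return (1 - t * t) / (1 + t * t), 2 * t / (1 + t * t)

for t in (Fraction(1, 2), Fraction(2, 3), Fraction(5, 7)):
    x, y = circle_point(t)
    assert x * x + y * y == 1
    print(x, y)    # e.g. t = 1/2 gives the point (3/5, 4/5)
```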
Diophantus also studied the equations of some non-rational curves, for which no rational parametrisation is possible. He managed to find some rational points on these curves (elliptic curves, as it happens, in what seems to be their first known occurrence) by means of what amounts to a tangent construction: translated into coordinate geometry
(which did not exist in Diophantus's time), his method would be visualised as drawing a tangent to a curve at a known rational point, and then finding the other point of intersection of the tangent with the curve; that other point is a new rational point. (Diophantus also resorted to what could be called a special case of a secant construction.)
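A minimal sketch of the tangent construction in modern coordinates, which Diophantus himself did not have. The curve y² = x³ − 2 with the known rational point (3, 5) is a standard textbook example used to illustrate the method; the helper function below is ours, not Diophantus's procedure.

```python
# Tangent construction on y^2 = x^3 + a*x + b: the tangent at a rational point
# meets the curve again in a new rational point.
from fractions import Fraction as F

def tangent_new_point(x0, y0, a):
    m = (3 * x0 * x0 + a) / (2 * y0)   # slope of the tangent at (x0, y0)
    x1 = m * m - 2 * x0                # the three intersection abscissae sum to m^2
    y1 = m * (x0 - x1) - y0
    return x1, y1

x1, y1 = tangent_new_point(F(3), F(5), F(0))   # curve y^2 = x^3 - 2 (a = 0, b = -2)
print(x1, y1)                                  # 129/100  -383/1000
assert y1 * y1 == x1 ** 3 - 2                  # the new point lies on the curve
```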
While Diophantus was concerned largely with rational solutions, he assumed some results on integer numbers, in particular that every integer is the sum of four squares (though he never stated as much explicitly).
Āryabhaṭa, Brahmagupta, Bhāskara
While Greek astronomy probably influenced Indian learning, to the point of introducing trigonometry, it seems to be the case that Indian mathematics is otherwise an indigenous tradition; in particular, there is no evidence that Euclid's Elements reached India before the 18th century.
Āryabhaṭa (476–550 AD) showed that pairs of simultaneous congruences n ≡ a₁ (mod m₁), n ≡ a₂ (mod m₂) could be solved by a method he called kuṭṭaka, or pulveriser; this is a procedure close to (a generalisation of) the Euclidean algorithm, which was probably discovered independently in India. Āryabhaṭa seems to have had in mind applications to astronomical calculations.
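In modern terms, the heart of such a method is the extended Euclidean algorithm, which produces x and y with ax + by = gcd(a, b) and hence solves linear congruences. A minimal Python sketch (a modern rendering, not Āryabhaṭa's own formulation; the example numbers are arbitrary):

```python
# Extended Euclidean algorithm: returns (g, x, y) with a*x + b*y = g = gcd(a, b).
def extended_gcd(a: int, b: int):
    if b == 0:
        return a, 1, 0
    g, x, y = extended_gcd(b, a % b)
    return g, y, x - (a // b) * y

g, x, y = extended_gcd(137, 60)
assert 137 * x + 60 * y == g == 1
print(x % 60)    # 53, the inverse of 137 modulo 60; scaling it solves 137*x ≡ c (mod 60)
```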
Brahmagupta (628 AD) started the systematic study of indefinite quadratic equations—in particular, the misnamed Pell equation, in which Archimedes may have first been interested, and which did not start to be solved in the West until the time of Fermat and Euler. Later Sanskrit authors would follow, using Brahmagupta's technical terminology. A general procedure (the chakravala, or "cyclic method") for solving Pell's equation was finally found by Jayadeva (cited in the eleventh century; his work is otherwise lost); the earliest surviving exposition appears in Bhāskara II's Bīja-gaṇita (twelfth century).
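The chakravala itself is somewhat involved; as an illustration of how Pell's equation x² − Ny² = 1 can be solved systematically, here is a minimal Python sketch using the continued-fraction expansion of √N, a related but distinct method later developed in Europe. N = 61 is the classic case treated by Bhāskara II and, much later, posed by Fermat.

```python
# Fundamental solution of x^2 - n*y^2 = 1 via the continued-fraction expansion of sqrt(n).
from math import isqrt

def pell(n: int):
    a0 = isqrt(n)                 # n must not be a perfect square
    m, d, a = 0, 1, a0
    h_prev, h = 1, a0             # numerators of the convergents of sqrt(n)
    k_prev, k = 0, 1              # denominators of the convergents
    while h * h - n * k * k != 1:
        m = d * a - m
        d = (n - m * m) // d
        a = (a0 + m) // d
        h, h_prev = a * h + h_prev, h
        k, k_prev = a * k + k_prev, k
    return h, k

print(pell(61))   # (1766319049, 226153980)
```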
Indian mathematics remained largely unknown in Europe until the late eighteenth century; Brahmagupta and Bhāskara's work was translated into English in 1817 by Henry Colebrooke.
Arithmetic in the Islamic golden age
In the early ninth century, the caliph Al-Ma'mun ordered translations of many Greek mathematical works and at least one Sanskrit work (the Sindhind,
which may or may not be Brahmagupta's Brāhmasphuṭasiddhānta).
Diophantus's main work, the Arithmetica, was translated into Arabic by Qusta ibn Luqa (820–912).
Part of the treatise al-Fakhri (by al-Karajī, 953 – ca. 1029) builds on it to some extent. According to Roshdi Rashed, Al-Karajī's contemporary Ibn al-Haytham knew what would later be called Wilson's theorem.
Western Europe in the Middle Ages
Other than a treatise on squares in arithmetic progression by Fibonacci—who traveled and studied in north Africa and Constantinople—no number theory to speak of was done in western Europe during the Middle Ages. Matters started to change in Europe in the late Renaissance, thanks to a renewed study of the works of Greek antiquity. A catalyst was the textual emendation and translation into Latin of Diophantus' Arithmetica.
Early modern number theory
Fermat
Pierre de Fermat (1607–1665) never published his writings; in particular, his work on number theory is contained almost entirely in letters to mathematicians and in private marginal notes. In his notes and letters, he scarcely wrote any proofs; he had no models in the area.
Over his lifetime, Fermat made the following contributions to the field:
One of Fermat's first interests was perfect numbers (which appear in Euclid, Elements IX) and amicable numbers; these topics led him to work on integer divisors, which were from the beginning among the subjects of the correspondence (1636 onwards) that put him in touch with the mathematical community of the day.
In 1638, Fermat claimed, without proof, that all whole numbers can be expressed as the sum of four squares or fewer.
Fermat's little theorem (1640): if a is not divisible by a prime p, then a^(p-1) ≡ 1 (mod p). A brief numerical check appears after this list.
If a and b are coprime, then a^2 + b^2 is not divisible by any prime congruent to −1 modulo 4; and every prime congruent to 1 modulo 4 can be written in the form x^2 + y^2. These two statements also date from 1640; in 1659, Fermat stated to Huygens that he had proven the latter statement by the method of infinite descent.
In 1657, Fermat posed the problem of solving x^2 − Ny^2 = 1 as a challenge to English mathematicians. The problem was solved in a few months by Wallis and Brouncker. Fermat considered their solution valid, but pointed out they had provided an algorithm without a proof (as had Jayadeva and Bhaskara, though Fermat wasn't aware of this). He stated that a proof could be found by infinite descent.
Fermat stated and proved (by infinite descent) in the appendix to Observations on Diophantus (Obs. XLV) that x^4 + y^4 = z^4 has no non-trivial solutions in the integers. Fermat also mentioned to his correspondents that x^3 + y^3 = z^3 has no non-trivial solutions, and that this could also be proven by infinite descent. The first known proof is due to Euler (1753; indeed by infinite descent).
Fermat claimed (Fermat's Last Theorem) to have shown there are no solutions to x^n + y^n = z^n for all n ≥ 3; this claim appears in his annotations in the margins of his copy of Diophantus.
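As promised above, a quick numerical check of Fermat's little theorem; the particular primes and bases below are arbitrary choices.
# Fermat's little theorem: a^(p-1) ≡ 1 (mod p) when p is prime and p does not divide a.
for p in (5, 13, 97):
    for a in (2, 3, 7):
        assert pow(a, p - 1, p) == 1   # three-argument pow performs modular exponentiation
# The converse fails: 341 = 11 * 31 is composite, yet 2^340 ≡ 1 (mod 341).
print(pow(2, 340, 341))   # 1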
Euler
The interest of Leonhard Euler (1707–1783) in number theory was first spurred in 1729, when a friend of his, the amateur Goldbach, pointed him towards some of Fermat's work on the subject. This has been called the "rebirth" of modern number theory, after Fermat's relative lack of success in getting his contemporaries' attention for the subject. Euler's work on number theory includes the following:
Proofs for Fermat's statements. This includes Fermat's little theorem (generalised by Euler to non-prime moduli); the fact that p = x^2 + y^2 if and only if p ≡ 1 (mod 4); initial work towards a proof that every integer is the sum of four squares (the first complete proof is by Joseph-Louis Lagrange (1770), soon improved by Euler himself); the lack of non-zero integer solutions to x^4 + y^4 = z^2 (implying the case n=4 of Fermat's last theorem, the case n=3 of which Euler also proved by a related method).
Pell's equation, first misnamed by Euler. He wrote on the link between continued fractions and Pell's equation.
First steps towards analytic number theory. In his work on sums of four squares, partitions, pentagonal numbers, and the distribution of prime numbers, Euler pioneered the use of what can be seen as analysis (in particular, infinite series) in number theory. Since he lived before the development of complex analysis, most of his work is restricted to the formal manipulation of power series. He did, however, do some very notable (though not fully rigorous) early work on what would later be called the Riemann zeta function.
Quadratic forms. Following Fermat's lead, Euler did further research on the question of which primes can be expressed in the form x^2 + Ny^2, some of it prefiguring quadratic reciprocity.
Diophantine equations. Euler worked on some Diophantine equations of genus 0 and 1. In particular, he studied Diophantus's work; he tried to systematise it, but the time was not yet ripe for such an endeavour—algebraic geometry was still in its infancy. He did notice there was a connection between Diophantine problems and elliptic integrals, whose study he had himself initiated.
Lagrange, Legendre, and Gauss
Joseph-Louis Lagrange (1736–1813) was the first to give full proofs of some of Fermat's and Euler's work and observations—for instance, the four-square theorem and the basic theory of the misnamed "Pell's equation" (for which an algorithmic solution was found by Fermat and his contemporaries, and also by Jayadeva and Bhaskara II before them). He also studied quadratic forms in full generality (as opposed to mX^2 + nY^2)—defining their equivalence relation, showing how to put them in reduced form, etc.
Adrien-Marie Legendre (1752–1833) was the first to state the law of quadratic reciprocity. He also
conjectured what amounts to the prime number theorem and Dirichlet's theorem on arithmetic progressions. He gave a full treatment of the equation ax^2 + by^2 + cz^2 = 0 and worked on quadratic forms along the lines later developed fully by Gauss. In his old age, he was the first to prove Fermat's Last Theorem for n = 5 (completing work by Peter Gustav Lejeune Dirichlet, and crediting both him and Sophie Germain).
In his Disquisitiones Arithmeticae (completed in 1798 and published in 1801), Carl Friedrich Gauss (1777–1855) proved the law of quadratic reciprocity and developed the theory of quadratic forms (in particular, defining their composition). He also introduced some basic notation (congruences) and devoted a section to computational matters, including primality tests. The last section of the Disquisitiones established a link between roots of unity and number theory:
The theory of the division of the circle...which is treated in sec. 7 does not belong
by itself to arithmetic, but its principles can only be drawn from higher arithmetic.
In this way, Gauss arguably made a first foray towards both Évariste Galois's work and algebraic number theory.
Maturity and division into subfields
Starting early in the nineteenth century, the following developments gradually took place:
The rise to self-consciousness of number theory (or higher arithmetic) as a field of study.
The development of much of modern mathematics necessary for basic modern number theory: complex analysis, group theory, Galois theory—accompanied by greater rigor in analysis and abstraction in algebra.
The rough subdivision of number theory into its modern subfields—in particular, analytic and algebraic number theory.
Algebraic number theory may be said to start with the study of reciprocity and cyclotomy, but truly came into its own with the development of abstract algebra and early ideal theory and valuation theory; see below. A conventional starting point for analytic number theory is Dirichlet's theorem on arithmetic progressions (1837), whose proof introduced L-functions and involved some asymptotic analysis and a limiting process on a real variable. The first use of analytic ideas in number theory actually
goes back to Euler (1730s), who used formal power series and non-rigorous (or implicit) limiting arguments. The use of complex analysis in number theory comes later: the work of Bernhard Riemann (1859) on the zeta function is the canonical starting point; Jacobi's four-square theorem (1839), which predates it, belongs to an initially different strand that has by now taken a leading role in analytic number theory (modular forms).
The history of each subfield is briefly addressed in its own section below; see the main article of each subfield for fuller treatments. Many of the most interesting questions in each area remain open and are being actively worked on.
Main subdivisions
Elementary number theory
The term elementary generally denotes a method that does not use complex analysis. For example, the prime number theorem was first proven using complex analysis in 1896, but an elementary proof was found only in 1949 by Erdős and Selberg. The term is somewhat ambiguous: for example, proofs based on complex Tauberian theorems (for example, Wiener–Ikehara) are often seen as quite enlightening but not elementary, in spite of using Fourier analysis, rather than complex analysis as such. Here as elsewhere, an elementary proof may be longer and more difficult for most readers than a non-elementary one.
Number theory has the reputation of being a field many of whose results can be stated to the layperson. At the same time, the proofs of these results are not particularly accessible, in part because the range of tools they use is, if anything, unusually broad within mathematics.
Analytic number theory
Analytic number theory may be defined
in terms of its tools, as the study of the integers by means of tools from real and complex analysis; or
in terms of its concerns, as the study within number theory of estimates on size and density, as opposed to identities.
Some subjects generally considered to be part of analytic number theory, for example, sieve theory, are better covered by the second rather than the first definition: some of sieve theory, for instance, uses little analysis, yet it does belong to analytic number theory.
The following are examples of problems in analytic number theory: the prime number theorem, the Goldbach conjecture (or the twin prime conjecture, or the Hardy–Littlewood conjectures), the Waring problem and the Riemann hypothesis. Some of the most important tools of analytic number theory are the circle method, sieve methods and L-functions (or, rather, the study of their properties). The theory of modular forms (and, more generally, automorphic forms) also occupies an increasingly central place in the toolbox of analytic number theory.
One may ask analytic questions about algebraic numbers, and use analytic means to answer such questions; it is thus that algebraic and analytic number theory intersect. For example, one may define prime ideals (generalizations of prime numbers in the field of algebraic numbers) and ask how many prime ideals there are up to a certain size. This question can be answered by means of an examination of Dedekind zeta functions, which are generalizations of the Riemann zeta function, a key analytic object at the roots of the subject. This is an example of a general procedure in analytic number theory: deriving information about the distribution of a sequence (here, prime ideals or prime numbers) from the analytic behavior of an appropriately constructed complex-valued function.
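The simplest instance of such a count is the distribution of the ordinary primes themselves, where the prime number theorem predicts that π(x), the number of primes up to x, is roughly x/ln x. Below is a small check of that estimate using a standard sieve; the cutoff of one million is arbitrary.
from math import log
def primes_up_to(n):
    # Sieve of Eratosthenes: return the list of primes <= n.
    is_prime = [True] * (n + 1)
    is_prime[0] = is_prime[1] = False
    for p in range(2, int(n ** 0.5) + 1):
        if is_prime[p]:
            for multiple in range(p * p, n + 1, p):
                is_prime[multiple] = False
    return [i for i, flag in enumerate(is_prime) if flag]
x = 1_000_000
print(len(primes_up_to(x)), x / log(x))   # 78498 versus about 72382: same order of magnitude, with the ratio tending to 1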
Algebraic number theory
An algebraic number is any complex number that is a solution to some polynomial equation with rational coefficients; for example, every solution of (say) x^5 + (11/2)x^3 − 7x^2 + 9 = 0 is an algebraic number. Fields of algebraic numbers are also called algebraic number fields, or shortly number fields. Algebraic number theory studies algebraic number fields. Thus, analytic and algebraic number theory can and do overlap: the former is defined by its methods, the latter by its objects of study.
It could be argued that the simplest kind of number fields (viz., quadratic fields) were already studied by Gauss, as the discussion of quadratic forms in Disquisitiones arithmeticae can be restated in terms of ideals and norms in quadratic fields. (A quadratic field consists of all numbers of the form a + b√d, where a and b are rational numbers and d is a fixed rational number whose square root is not rational.)
For that matter, the 11th-century chakravala method amounts—in modern terms—to an algorithm for finding the units of a real quadratic number field. However, neither Bhāskara nor Gauss knew of number fields as such.
The grounds of the subject as we know it were set in the late nineteenth century, when ideal numbers, the theory of ideals and valuation theory were developed; these are three complementary ways of dealing with the lack of unique factorisation in algebraic number fields. (For example, in the field generated by the rationals and √−5, the number 6 can be factorised both as 6 = 2 · 3 and 6 = (1 + √−5)(1 − √−5); all of 2, 3, 1 + √−5 and 1 − √−5 are irreducible, and thus, in a naïve sense, analogous to primes among the integers.) The initial impetus for the development of ideal numbers (by Kummer) seems to have come from the study of higher reciprocity laws, that is, generalisations of quadratic reciprocity.
Number fields are often studied as extensions of smaller number fields: a field L is said to be an extension of a field K if L contains K.
(For example, the complex numbers C are an extension of the reals R, and the reals R are an extension of the rationals Q.)
Classifying the possible extensions of a given number field is a difficult and partially open problem. Abelian extensions—that is, extensions L of K such that the Galois group Gal(L/K) of L over K is an abelian group—are relatively well understood.
Their classification was the object of the programme of class field theory, which was initiated in the late 19th century (partly by Kronecker and Eisenstein) and carried out largely in 1900–1950.
An example of an active area of research in algebraic number theory is Iwasawa theory. The Langlands program, one of the main current large-scale research plans in mathematics, is sometimes described as an attempt to generalise class field theory to non-abelian extensions of number fields.
Diophantine geometry
The central problem of Diophantine geometry is to determine when a Diophantine equation has solutions, and if it does, how many. The approach taken is to think of the solutions of an equation as a geometric object.
For example, an equation in two variables defines a curve in the plane. More generally, an equation, or system of equations, in two or more variables defines a curve, a surface or some other such object in n-dimensional space. In Diophantine geometry, one asks whether there are any rational points (points all of whose coordinates are rationals) or
integral points (points all of whose coordinates are integers) on the curve or surface. If there are any such points, the next step is to ask how many there are and how they are distributed. A basic question in this direction is whether there are finitely or infinitely many rational points on a given curve (or surface).
In the Pythagorean equation x^2 + y^2 = 1, we would like to study its rational solutions, that is, its solutions (x, y) such that x and y are both rational. This is the same as asking for all integer solutions to a^2 + b^2 = c^2; any solution to the latter equation gives us a solution x = a/c, y = b/c to the former. It is also the same as asking for all points with rational coordinates on the curve described by x^2 + y^2 = 1. (This curve happens to be a circle of radius 1 around the origin.)
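A short sketch of the correspondence just described: rational points on the unit circle arise from the classical parametrization x = (1 − t^2)/(1 + t^2), y = 2t/(1 + t^2), and clearing denominators turns integers q > p > 0 into integer solutions of a^2 + b^2 = c^2. The helper names and the particular parameter values below are arbitrary.
from fractions import Fraction
def rational_point_on_circle(t):
    # Rational point (x, y) on x^2 + y^2 = 1 obtained from the slope parameter t.
    t = Fraction(t)
    x = (1 - t * t) / (1 + t * t)
    y = 2 * t / (1 + t * t)
    assert x * x + y * y == 1
    return x, y
def pythagorean_triple(p, q):
    # Integer solution of a^2 + b^2 = c^2 built from integers q > p > 0.
    a, b, c = q * q - p * p, 2 * p * q, q * q + p * p
    assert a * a + b * b == c * c
    return a, b, c
print(rational_point_on_circle(Fraction(1, 2)))             # (Fraction(3, 5), Fraction(4, 5))
print(pythagorean_triple(1, 2), pythagorean_triple(2, 3))   # (3, 4, 5) (5, 12, 13)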
The rephrasing of questions on equations in terms of points on curves turns out to be felicitous. The finiteness or not of the number of rational or integer points on an algebraic curve—that is, rational or integer solutions to an equation f(x, y) = 0, where f is a polynomial in two variables—turns out to depend crucially on the genus of the curve. The genus can be defined as follows: allow the variables in f(x, y) = 0 to be complex numbers; then f(x, y) = 0 defines a 2-dimensional surface in (projective) 4-dimensional space (since two complex variables can be decomposed into four real variables, that is, four dimensions). We then count the number of (doughnut) holes in the surface and call this number the genus of f(x, y) = 0. Other geometrical notions turn out to be just as crucial.
There is also the closely linked area of Diophantine approximations: given a number x, how well can it be approximated by rationals? (We are looking for approximations that are good relative to the amount of space that it takes to write the rational: call a/q (with gcd(a, q) = 1) a good approximation to x if |x − a/q| < 1/q^c, where c is large.) This question is of special interest if x is an algebraic number. If x cannot be well approximated, then some equations do not have integer or rational solutions. Moreover, several concepts (especially that of height) turn out to be critical both in Diophantine geometry and in the study of Diophantine approximations. This question is also of special interest in transcendental number theory: if a number can be better approximated than any algebraic number, then it is a transcendental number. It is by this argument that π and e have been shown to be transcendental.
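A small numerical illustration of the exponent c just defined, using two classical rational approximations to π; the fractions are famous examples and the helper function is purely illustrative.
from math import pi, log
def quality(a, q, x=pi):
    # The exponent c with |x - a/q| = 1/q**c; larger c means a better approximation.
    return -log(abs(x - a / q)) / log(q)
print(quality(22, 7))      # about 3.43
print(quality(355, 113))   # about 3.20; both are comfortably above 2, so both count as good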
Diophantine geometry should not be confused with the geometry of numbers, which is a collection of graphical methods for answering certain questions in algebraic number theory. Arithmetic geometry, however, is a contemporary term
for much the same domain as that covered by the term Diophantine geometry. The term arithmetic geometry is arguably used
most often when one wishes to emphasise the connections to modern algebraic geometry (as in, for instance, Faltings's theorem) rather than to techniques in Diophantine approximations.
Other subfields
The areas below date from no earlier than the mid-twentieth century, even if they are based on older material. For example, as is explained below, the matter of algorithms in number theory is very old, in some sense older than the concept of proof; at the same time, the modern study of computability dates only from the 1930s and 1940s, and computational complexity theory from the 1970s.
Probabilistic number theory
Much of probabilistic number theory can be seen as an important special case of the study of variables that are almost, but not quite, mutually independent. For example, the event that a random integer between one and a million be divisible by two and the event that it be divisible by three are almost independent, but not quite.
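The near-independence mentioned here can be checked directly; the range from one to a million is the one used in the text, and the one-line counts are purely illustrative.
N = 1_000_000
div2 = sum(1 for n in range(1, N + 1) if n % 2 == 0) / N   # 0.5
div3 = sum(1 for n in range(1, N + 1) if n % 3 == 0) / N   # 0.333333
div6 = sum(1 for n in range(1, N + 1) if n % 6 == 0) / N   # 0.166666
print(div2 * div3, div6)   # 0.1666665 versus 0.166666: close, but not equal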
It is sometimes said that probabilistic combinatorics uses the fact that whatever happens with probability greater than 0 must happen sometimes; one may say with equal justice that many applications of probabilistic number theory hinge on the fact that whatever is unusual must be rare. If certain algebraic objects (say, rational or integer solutions to certain equations) can be shown to be in the tail of certain sensibly defined distributions, it follows that there must be few of them; this is a very concrete non-probabilistic statement following from a probabilistic one.
At times, a non-rigorous, probabilistic approach leads to a number of heuristic algorithms and open problems, notably Cramér's conjecture.
Arithmetic combinatorics
If we begin from a fairly "thick" infinite set A, does it contain many elements in arithmetic progression: a, a + b, a + 2b, a + 3b, ..., a + 10b, say? Should it be possible to write large integers as sums of elements of A?
These questions are characteristic of arithmetic combinatorics. This is a presently coalescing field; it subsumes additive number theory (which concerns itself with certain very specific sets of arithmetic significance, such as the primes or the squares) and, arguably, some of the geometry of numbers, together with some rapidly developing new material. Its focus on issues of growth and distribution accounts in part for its developing links with ergodic theory, finite group theory, model theory, and other fields. The term additive combinatorics is also used; however, the sets A being studied need not be sets of integers, but rather subsets of non-commutative groups, for which the multiplication symbol, not the addition symbol, is traditionally used; they can also be subsets of rings, in which case the growth of A + A and A · A may be compared.
Computational number theory
While the word algorithm goes back only to certain readers of al-Khwārizmī, careful descriptions of methods of solution are older than proofs: such methods (that is, algorithms) are as old as any recognisable mathematics—ancient Egyptian, Babylonian, Vedic, Chinese—whereas proofs appeared only with the Greeks of the classical period.
An early case is that of what we now call the Euclidean algorithm. In its basic form (namely, as an algorithm for computing the greatest common divisor) it appears as Proposition 2 of Book VII in Elements, together with a proof of correctness. However, in the form that is often used in number theory (namely, as an algorithm for finding integer solutions to an equation ax + by = c, or, what is the same, for finding the quantities whose existence is assured by the Chinese remainder theorem) it first appears in the works of Āryabhaṭa (5th–6th century CE) as an algorithm called kuṭṭaka ("pulveriser"), without a proof of correctness.
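In its Elements form the algorithm repeatedly replaces the larger number by its remainder on division by the smaller; a minimal modern transcription follows (the sample inputs are arbitrary).
def gcd(a, b):
    # Greatest common divisor by the Euclidean algorithm (Elements, Book VII, Proposition 2).
    while b != 0:
        a, b = b, a % b
    return a
print(gcd(252, 105))   # 21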
There are two main questions: "Can we compute this?" and "Can we compute it rapidly?" Anyone can test whether a number is prime or, if it is not, split it into prime factors; doing so rapidly is another matter. We now know fast algorithms for testing primality, but, in spite of much work (both theoretical and practical), no truly fast algorithm for factoring.
The difficulty of a computation can be useful: modern protocols for encrypting messages (for example, RSA) depend on functions that are known to all, but whose inverses are known only to a chosen few, and would take one too long a time to figure out on one's own. For example, these functions can be such that their inverses can be computed only if certain large integers are factorized. While many difficult computational problems outside number theory are known, most working encryption protocols nowadays are based on the difficulty of a few number-theoretical problems.
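A toy illustration of the kind of trapdoor described here, in the style of RSA but with absurdly small primes, so it is neither secure nor any real library's interface: anyone can encrypt with the public pair (n, e), while computing the decryption exponent requires knowing the factors of n.
p, q = 61, 53                        # the secret prime factors
n = p * q                            # 3233, the public modulus
e = 17                               # public exponent, coprime to (p - 1) * (q - 1)
d = pow(e, -1, (p - 1) * (q - 1))    # private exponent; finding it needs p and q (Python 3.8+)
message = 65
ciphertext = pow(message, e, n)      # anyone can compute this from (n, e)
recovered = pow(ciphertext, d, n)    # recovering the message needs d
print(recovered == message)          # True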
Some things may not be computable at all; in fact, this can be proven in some instances. For instance, in 1970, it was proven, as a solution to Hilbert's 10th problem, that there is no Turing machine which can solve all Diophantine equations. In particular, this means that, given a computably enumerable set of axioms, there are Diophantine equations for which there is no proof, starting from the axioms, of whether the set of equations has or does not have integer solutions. (We would necessarily be speaking of Diophantine equations for which there are no integer solutions, since, given a Diophantine equation with at least one solution, the solution itself provides a proof of the fact that a solution exists. We cannot prove that a particular Diophantine equation is of this kind, since this would imply that it has no solutions.)
Applications
The number-theorist Leonard Dickson (1874–1954) said "Thank God that number theory is unsullied by any application". Such a view is no longer applicable to number theory. In 1974, Donald Knuth said "...virtually every theorem in elementary number theory arises in a natural, motivated way in connection with the problem of making computers do high-speed numerical calculations".
Elementary number theory is taught in discrete mathematics courses for computer scientists; on the other hand, number theory also has applications to the continuous in numerical analysis. As well as the well-known applications to cryptography, there are also applications to many other areas of mathematics.
Prizes
The American Mathematical Society awards the Cole Prize in Number Theory. Moreover, number theory is one of the three mathematical subdisciplines rewarded by the Fermat Prize.
See also
Algebraic function field
Finite field
p-adic number
Notes
References
Sources
Further reading
Two of the most popular introductions to the subject are:
Hardy and Wright's book is a comprehensive classic, though its clarity sometimes suffers due to the authors' insistence on elementary methods (Apostol n.d.).
Vinogradov's main attraction consists in its set of problems, which quickly lead to Vinogradov's own research interests; the text itself is very basic and close to minimal. Other popular first introductions are:
Popular choices for a second textbook include:
External links
Number Theory entry in the Encyclopedia of Mathematics
Number Theory Web |
21861 | https://en.wikipedia.org/wiki/Cryptonomicon | Cryptonomicon | Cryptonomicon is a 1999 novel by American author Neal Stephenson, set in two different time periods. One group of characters are World War II-era Allied codebreakers and tactical-deception operatives affiliated with the Government Code and Cypher School at Bletchley Park (UK), and disillusioned Axis military and intelligence figures. The second narrative is set in the late 1990s, with characters that are (in part) descendants of those of the earlier time period, who employ cryptologic, telecom, and computer technology to build an underground data haven in the fictional Sultanate of Kinakuta. Their goal is to facilitate anonymous Internet banking using electronic money and (later) digital gold currency, with a long-term objective to distribute Holocaust Education and Avoidance Pod (HEAP) media for instructing genocide-target populations on defensive warfare.
Genre and subject matter
Cryptonomicon is closer to the genres of historical fiction and contemporary techno-thriller than to the science fiction of Stephenson's two previous novels, Snow Crash and The Diamond Age. It features fictionalized characterizations of such historical figures as Alan Turing, Albert Einstein, Douglas MacArthur, Winston Churchill, Isoroku Yamamoto, Karl Dönitz, Hermann Göring, and Ronald Reagan, as well as some highly technical and detailed descriptions of modern cryptography and information security, with discussions of prime numbers, modular arithmetic, and Van Eck phreaking.
Title
According to Stephenson, the title is a play on Necronomicon, the title of a book mentioned in the stories of horror writer H. P. Lovecraft:
The novel's Cryptonomicon, described as a "cryptographer's bible", is a fictional book summarizing America's knowledge of cryptography and cryptanalysis. Begun by John Wilkins (the Cryptonomicon is mentioned in Quicksilver) and amended over time by William Friedman, Lawrence Waterhouse, and others, the Cryptonomicon is described by Katherine Hayles as "a kind of Kabala created by a Brotherhood of Code that stretches across centuries. To know its contents is to qualify as a Morlock among the Eloi, and the elite among the elite are those gifted enough actually to contribute to it."
Plot
The action takes place in two periods—World War II and the late 1990s, during the Internet boom and the Asian financial crisis.
In 1942, Lawrence Pritchard Waterhouse, a young United States Navy code breaker and mathematical genius, is assigned to the newly formed joint British and American Detachment 2702. This ultra-secret unit's role is to hide the fact that Allied intelligence has cracked the German Enigma code. The detachment stages events, often behind enemy lines, that provide alternative explanations for the Allied intelligence successes. United States Marine sergeant Bobby Shaftoe, a veteran of China and Guadalcanal, serves in unit 2702, carrying out Waterhouse's plans. At the same time, Japanese soldiers, including mining engineer Goto Dengo, a "friendly enemy" of Shaftoe's, are assigned to build a mysterious bunker in the mountains in the Philippines as part of what turns out to be a literal suicide mission.
Circa 1997, Randy Waterhouse (Lawrence's grandson) joins his old role-playing game companion Avi Halaby in a new startup, providing Pinoy-grams (inexpensive, non-real-time video messages) to migrant Filipinos via new fiber-optic cables. The Epiphyte Corporation uses this income stream to fund the creation of a data haven in the nearby fictional Sultanate of Kinakuta. Vietnam veteran Doug Shaftoe, the son of Bobby Shaftoe, and his daughter Amy do the undersea surveying for the cables and engineering work on the haven, which is overseen by Goto Furudenendu, heir-apparent to Goto Engineering. Complications arise as figures from the past reappear seeking gold or revenge.
Characters
World War II storyline
Fictional characters
Sgt. Robert "Bobby" Shaftoe, a gung-ho, haiku-writing United States Marine Raider.
Lawrence Pritchard Waterhouse, an American cryptographer/mathematician serving as an officer in the United States Navy, although he is known to wear an Army uniform if the situation calls for it.
Günter Bischoff, a Kapitänleutnant in the Kriegsmarine, who commands a U-boat for much of the story, and later takes command of a new, advanced submarine fueled with hydrogen peroxide.
Rudolf "Rudy" von Hacklheber, a non-Nazi German mathematician and cryptographer, who spent time attending Princeton University, where he had a romantic relationship with Alan Turing and befriended Waterhouse. He seems to know more about the mysterious Societas Eruditorum than any non-member.
Earl Comstock, a former Electrical Till Corp. executive and US Army officer, who eventually founds the NSA and becomes a key policy maker for US involvement in the Second Indochina War.
Julieta Kivistik, a Finnish woman who assists some of the World War II characters when they find themselves stranded in Sweden, and who later gives birth to a baby boy (Günter Enoch Bobby Kivistik) whose paternity is uncertain.
“Uncle” Otto Kivistik, Julieta's uncle, who runs a successful smuggling ring between neutral Sweden, Finland, and the USSR during World War II.
Mary cCmndhd (pronounced "Skuhmithid" and anglicized as "Smith"), a member of a Qwghlmian immigrant community living in Australia, who catches the attention of Lawrence Waterhouse while he is stationed in Brisbane.
Glory Altamira, a nursing student and Bobby Shaftoe's Filipina lover. She becomes a member of the Philippine resistance movement during the Japanese occupation. Mother of Douglas MacArthur Shaftoe.
Historical figures
Fictionalized versions of several historical figures appear in the World War II storyline:
Alan Turing, the cryptographer and computer scientist, is a colleague and friend of Lawrence Waterhouse and sometime lover of Rudy von Hacklheber.
Douglas MacArthur, the famed U.S. Army general, who takes a central role toward the end of the World War II timeline.
Karl Dönitz, Großadmiral of the Kriegsmarine, is never actually seen as a character but issues orders to his U-boats, including the one captained by Bischoff. Bischoff threatens to reveal information about hidden war gold unless Dönitz rescinds an order to sink his submarine.
Hermann Göring, who appears extensively in the recollections of Rudy von Hacklheber as Rudy recounts how Göring tried recruiting him as a cryptographer for the Nazis: Rudy delivers an intentionally weakened system, reserving the full system for the use of the conspiracy among the characters to locate hidden gold.
Future United States President Ronald Reagan is depicted during his wartime service as an officer in the U.S. Army Air Corps Public Relations branch's 1st Motion Picture Unit. He attempts to film an interview with the recuperating and morphine-addled Bobby Shaftoe, who spoils the production with his account of a giant lizard attack and his harsh criticism of General MacArthur.
Admiral Isoroku Yamamoto's 1943 death at the hands of U.S. Army fighter aircraft during Operation Vengeance over Bougainville Island fills an entire chapter. During his fateful flight, the Commander-in-Chief of the Japanese Imperial Navy's Combined Fleet reflects upon the failures and hubris of his Imperial Army counterparts, who persistently underestimate the cunning and ferocity of their Allied opponents in the Pacific Theatre of Operations. As his damaged transport plane completes its terminal descent, Yamamoto realizes that all of the Japanese military codes have been broken, which explains why he is "on fire and hurtling through the jungle at a hundred miles per hour in a chair, closely pursued by tons of flaming junk."
Albert Einstein brushes off a young Lawrence Waterhouse's request for advice. During his year of undergraduate study at Princeton, Waterhouse periodically wanders the halls of the Institute for Advanced Study, randomly asking mathematicians (whose names he never remembers) for advice on how to make intricate calculations for his "sprocket question," which is how he eventually meets Turing.
Harvest, an early supercomputer built by IBM (known as "ETC" or "Electrical Till Corp." in the novel) for the National Security Agency for cryptanalysis. The fictionalized Harvest became operational in the early 1950s, under the supervision of Earl Comstock, while the actual system was installed in 1962.
1990s storyline
The precise date of this storyline is not established, but the ages of characters, the technologies described, and certain date-specific references suggest that it is set in the late 1990s, at the time of the internet boom and the Asian financial crisis.
Randall "Randy" Lawrence Waterhouse, eldest grandson of Lawrence and Mary Waterhouse (née cCmndhd) and an expert systems and network administrator with the Epiphyte(2) corporation. He is mentioned in Stephenson's 2019 novel Fall, in which he has amassed a fortune that led to the creation of a charitable Foundation bearing his name.
Avi "Avid" Halaby, Randy's business partner in Epiphyte(2), of which he is the CEO. He is descended on his mother's side from New Mexican Crypto-Jews, which detail, while seemingly included as a pun, is explored further in The Baroque Cycle. Avi is obsessed with using technology to prevent future genocides, namely by creating a handbook of basic technology and defense practices. His nickname Avid comes from his love of role playing games.
America "Amy" Shaftoe, Doug Shaftoe's daughter (and Bobby Shaftoe's granddaughter) who has moved from the U.S. to live with Doug in the Philippines, and who becomes Randy's love interest.
Dr. Hubert Kepler, a.k.a. "The Dentist," predatory billionaire investment fund manager, Randy and Avi's business rival.
Eberhard Föhr, a member of Epiphyte(2) and an expert in biometrics.
John Cantrell, a member of Epiphyte(2), a libertarian who is an expert in cryptography and who wrote the fictional cryptography program Ordo.
Tom Howard, a member of Epiphyte(2), a libertarian and firearms enthusiast who is an expert in large computer installations.
Beryl Hagen, chief financial officer of Epiphyte(2) and veteran of a dozen startups.
Charlene, a liberal arts academic and Randy's girlfriend at the beginning of the novel, who later moves to New Haven, Connecticut, to live and work with Dr. G.E.B. (Günter Enoch Bobby) Kivistik.
Andrew Loeb, a former friend and now Randy's enemy, a survivalist and neo-Luddite whose lawsuits destroyed Randy and Avi's first start-up, and who at the time of the novel works as a lawyer for Hubert Kepler. He is referred to by Randy as "Gollum," comparing him to that character in the novels of J. R. R. Tolkien.
Both storylines
Goto Dengo, a lieutenant in the Imperial Japanese Army and a mining engineer involved in an Axis project to bury looted gold in the Philippines. In the present-day storyline, he is a semi-retired chief executive of a large Japanese construction company, Goto Engineering.
Enoch Root, a mysterious, seemingly ageless former Catholic priest and physician, serving as a coast-watcher with the ANZACs during World War II, later a chaplain in the top-secret British-American "Unit 2702," and an important figure in the equally mysterious Societas Eruditorum. He first appears on a Guadalcanal beach to save a badly injured Bobby Shaftoe. Hints about his longevity emerge when Root is critically injured in Norrsbruk, Sweden, and is wed to Julieta Kivistik on his "death bed" so that she and her unborn child can obtain British citizenship. Root is officially pronounced dead, but is slipped away, rapidly recovering after a mysterious therapeutic agent is obtained from his antique cigar box. He turns up in Manila later in 1944 and goes on to spend part of the 1950s with the National Security Agency and, by the 1990s, has been based mostly in the Philippines as a Catholic lay-worker while "gadding about trying to bring Internet stuff to China." Root also appears in Stephenson's The Baroque Cycle, which is set between 1655 and 1714, and in his 2019 novel Fall; or, Dodge in Hell, including a chapter set in late 21st-century Seattle.
Mr. Wing, a wartime northern Chinese slave of the Japanese in the Philippines, who went on to become a general in the Chinese army and later a senior official in the State Grid Corporation of China. Described by Enoch Root as a "wily survivor of many purges," Wing is one of only two other survivors (along with Goto Dengo and a Filipino worker named Bong) of the Japanese gold burial project, and he competes with Goto and Epiphyte(2) to recover the buried treasure. Although Root and Wing do not meet during the action of the novel, Randy reflects that "it is hard not to get the idea that Enoch Root and General Wing may have other reasons to be pissed off at each other."
Douglas (Doug) MacArthur Shaftoe, son of Bobby Shaftoe and Glory Altamira, is introduced near the end of the World War II storyline as a toddler during the Liberation of Manila, when he first meets his father, who tries to explain Shaftoe family heritage, including their enthusiasm for "displaying adaptability." In the modern-day story line, Doug is a retired U.S. Navy SEAL officer and Annapolis graduate, who lives in the Philippines and operates Semper Marine Services, an underwater survey business with his daughter, Amy, conducting treasure hunts as a sideline.
Dr. Günter Enoch Bobby "G.E.B." Kivistik is introduced in the modern storyline as a smug, Oxford-educated liberal-arts professor from Yale who recruits, and later seduces, Randy Waterhouse's girlfriend, Charlene. In the World War II storyline he is the unborn son of Julieta Kivistik and one of three possible fathers (hence his unusual name) including Günter Bischoff, Enoch Root and Bobby Shaftoe. He is a minor character in Cryptonomicon, but both his [impending] birth and his participation in Charlene's "War as Text" conference catalyze major plot developments.
Mary cCmndhd Waterhouse, Randy's Australian-born, Qwghlmian grandmother and Lawrence's wife.
Technical content
Portions of Cryptonomicon contain large amounts of exposition. Several pages are spent explaining in detail some of the concepts behind cryptography and data storage security, including a description of Van Eck phreaking.
Cryptography
Pontifex Cipher
Stephenson also includes a precise description of (and even a Perl script for) the Solitaire (or Pontifex) cipher, a cryptographic algorithm developed by Bruce Schneier for use with a deck of playing cards, as part of the plot. The Perl script was written by cryptographer and cypherpunk Ian Goldberg.
#!/usr/bin/perl -s
$f=$d?-1:1;$D=pack('C*',33..86);$p=shift;
$p=~y/a-z/A-Z/;$U='$D=~s/(.*)U$/U$1/;
$D=~s/U(.)/$1U/;';($V=$U)=~s/U/V/g;
$p=~s/[A-Z]/$k=ord($&)-64,&e/eg;$k=0;
while(<>){y/a-z/A-Z/;y/A-Z//dc;$o.=$_}$o.='X'
while length ($o)%5&&!$d;
$o=~s/./chr(($f*&e+ord($&)-13)%26+65)/eg;
$o=~s/X*$// if $d;$o=~s/.{5}/$& /g;
print"$o\n";sub v{$v=ord(substr($D,$_[0]))-32;
$v>53?53:$v}
sub w{$D=~s/(.{$_[0]})(.*)(.)/$2$1$3/}
sub e{eval"$U$V$V";$D=~s/(.*)([UV].*[UV])(.*)/$3$2$1/;
&w(&v(53));$k?(&w($k)):($c=&v(&v(0)),$c>52?&e:$c)}
In the first printing of Cryptonomicon, the script contained a syntax error in a substitution operator which prevented it from running. This was fixed in subsequent printings.
A verbose and annotated version of the script appeared for some time on Bruce Schneier's web site.
One-time pad
Several of the characters in the book communicate with each other through the use of one-time pads. A one-time pad (OTP) is an encryption technique that requires a single-use pre-shared key of at least the same length as the encrypted message.
The story posits a variation of the OTP technique wherein there is no pre-shared key; the key is instead generated algorithmically.
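A minimal sketch of the classical technique (not the novel's algorithmically generated variant): with a truly random, single-use key at least as long as the message, XOR-based encryption and decryption are the same operation. The helper name is illustrative only.
import secrets
def xor_bytes(data, key):
    # XOR each message byte with the corresponding key byte; the key must be at least as long.
    return bytes(d ^ k for d, k in zip(data, key))
message = b"ATTACK AT DAWN"
key = secrets.token_bytes(len(message))   # single-use, truly random pad
ciphertext = xor_bytes(message, key)
print(xor_bytes(ciphertext, key))         # b'ATTACK AT DAWN'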
Software
Finux
Stephenson also describes computers running a fictional operating system, Finux. The name is a thinly veiled reference to Linux, a kernel originally written by the Finnish native Linus Torvalds. Stephenson changed the name so as not to be creatively constrained by the technical details of Linux-based operating systems.
Other technology
Carbon arc lamp
The Dun improved galvanic element
Mercury acoustic delay-line computer memory
Allusions and references from other works
An excerpt from Cryptonomicon was originally published in the short story collection Disco 2000, edited by Sarah Champion and published in 1998. Stephenson's subsequent work, a trio of novels dubbed The Baroque Cycle, provides part of the deep backstory to the characters and events featured in Cryptonomicon. Set in the late 17th and early 18th centuries, the novels feature ancestors of several characters in Cryptonomicon, as well as events and objects which affect the action of the later-set book. The subtext implies the existence of secret societies or conspiracies, and familial tendencies and groupings found within those darker worlds.
The short story "Jipi and the Paranoid Chip" takes place some time after the events of Cryptonomicon. In the story, the construction of the Crypt has triggered economic growth in Manila and Kinakuta, in which Goto Engineering, and Homa/Homer Goto, a Goto family heir, are involved. The IDTRO ("Black Chamber") is also mentioned.
Stephenson's 2019 novel, Fall; or, Dodge in Hell, is promoted as a sequel to Reamde (2011), but as the story unfolds, it is revealed that Fall, Reamde, Cryptonomicon and The Baroque Cycle are all set in the same fictional universe, with references to the Waterhouse, Shaftoe and Hacklheber families, as well as Societas Eruditorum and Epiphyte Corporation. Two "Wise" entities from The Baroque Cycle also appear in Fall, including Enoch Root.
Peter Thiel states in his book Zero to One that Cryptonomicon was required reading during the early days of PayPal.
Literary significance and criticism
According to critic Jay Clayton, the book is written for a technical or geek audience. Despite the technical detail, the book drew praise from both Stephenson's science fiction fan base and literary critics and buyers. In his book Charles Dickens in Cyberspace: The Afterlife of the Nineteenth Century in Postmodern Culture (2003), Jay Clayton calls Stephenson's book the “ultimate geek novel” and draws attention to the “literary-scientific-engineering-military-industrial-intelligence alliance” that produced discoveries in two eras separated by fifty years, World War II and the Internet age. In July 2012, io9 included the book on its list of "10 Science Fiction Novels You Pretend to Have Read".
Awards and nominations
Editions
: Hardcover (1999)
: Paperback (2000)
: Audio Cassette (abridged) (2001)
: Mass Market Paperback (2002)
E-book editions for Adobe Reader, Amazon Kindle, Barnes and Noble Nook, Kobo eReader, and Microsoft Reader
Unabridged audio download from iTunes and Audible.com
Translations into other languages: Czech, Danish, Dutch, French, German, Italian, Japanese, Korean, Polish, Russian, Spanish. The Danish, French, and Spanish translations divide the book into three volumes. The Japanese translation divides the book into four volumes.
See also
Fort Drum (Manila Bay), the "concrete battleship"
Cryptocurrencies
Operation Mincemeat
References
External links
The Solitaire Encryption Algorithm, developed by Bruce Schneier
1999 American novels
1999 science fiction novels
Novels by Neal Stephenson
The Baroque Cycle
Novels about cryptography
Novels set during World War II
U-boat fiction
Novels about computing
Novels about submarine warfare
American science fiction novels
Novels set in Buckinghamshire
Novels set in fictional countries
Cultural depictions of Isoroku Yamamoto
Cultural depictions of Hermann Göring
Cultural depictions of Douglas MacArthur
Cultural depictions of Ronald Reagan
Cultural depictions of Albert Einstein
Cultural depictions of Alan Turing
Avon (publisher) books |
21863 | https://en.wikipedia.org/wiki/Netscape%20Navigator | Netscape Navigator | Netscape Navigator was a proprietary web browser, and the original browser of the Netscape line, from versions 1 to 4.08, and 9.x. It was the flagship product of the Netscape Communications Corp and was the dominant web browser in terms of usage share in the 1990s, but by around 2003 its use had almost disappeared. This was partly because the Netscape Corporation (later purchased by AOL) did not sustain Netscape Navigator's technical innovation in the late 1990s.
The business demise of Netscape was a central premise of Microsoft's antitrust trial, wherein the Court ruled that Microsoft's bundling of Internet Explorer with the Windows operating system was a monopolistic and illegal business practice. The decision came too late for Netscape, however, as Internet Explorer had by then become the dominant web browser in Windows.
The Netscape Navigator web browser was succeeded by the Netscape Communicator suite in 1997. Netscape Communicator's 4.x source code was the base for the Netscape-developed Mozilla Application Suite, which was later renamed SeaMonkey. Netscape's Mozilla Suite also served as the base for a browser-only spinoff called Mozilla Firefox.
The Netscape Navigator name returned in 2007 when AOL announced version 9 of the Netscape series of browsers, Netscape Navigator 9. On 28 December 2007, AOL canceled its development but continued supporting the web browser with security updates until 1 March 2008. AOL allows downloading of archived versions of the Netscape Navigator web browser family. AOL maintains the Netscape website as an Internet portal.
History and development
Origin
Netscape Navigator was inspired by the success of the Mosaic web browser, which was co-written by Marc Andreessen, a part-time employee of the National Center for Supercomputing Applications at the University of Illinois. After Andreessen graduated in 1993, he moved to California and there met Jim Clark, the recently departed founder of Silicon Graphics. Clark believed that the Mosaic browser had great commercial possibilities and provided the seed money. Soon Mosaic Communications Corporation was in business in Mountain View, California, with Andreessen as a vice-president. Since the University of Illinois was unhappy with the company's use of the Mosaic name, the company changed its name to Netscape Communications (suggested by product manager Greg Sands) and named its flagship web browser Netscape Navigator.
Netscape announced in its first press release (13 October 1994) that it would make Navigator available without charge to all non-commercial users, and beta versions of version 1.0 and 1.1 were indeed freely downloadable in November 1994 and March 1995, with the full version 1.0 available in December 1994. Netscape's initial corporate policy regarding Navigator claimed that it would make Navigator freely available for non-commercial use in accordance with the notion that Internet software should be distributed for free.
However, within two months of that press release, Netscape apparently reversed its policy on who could freely obtain and use version 1.0 by only mentioning that educational and non-profit institutions could use version 1.0 at no charge.
The reversal was complete with the availability of version 1.1 beta on 6 March 1995, in which a press release states that the final 1.1 release would be available at no cost only for academic and non-profit organizational use. Gone was the notion expressed in the first press release that Navigator would be freely available in the spirit of Internet software.
Some security experts and cryptographers found that all released Netscape versions had major security problems: the browser could be crashed with long URLs, and its 40-bit encryption keys offered only weak protection.
The first few releases of the product were made available in "commercial" and "evaluation" versions; for example, version "1.0" and version "1.0N". The "N" evaluation versions were completely identical to the commercial versions; the letter was there to remind people to pay for the browser once they felt they had tried it long enough and were satisfied with it. This distinction was formally dropped within a year of the initial release, and the full version of the browser continued to be made available for free online, with boxed versions available on floppy disks (and later CDs) in stores along with a period of phone support. During this era, "Internet Starter Kit" books were popular, and usually included a floppy disk or CD containing internet software, and this was a popular means of obtaining Netscape's and other browsers. Email support was initially free and remained so for a year or two until the volume of support requests grew too high.
During development, the Netscape browser was known by the code name Mozilla, which became the name of a Godzilla-like cartoon dragon mascot used prominently on the company's web site. The Mozilla name was also used as the User-Agent in HTTP requests by the browser. Other web browsers claimed to be compatible with Netscape's extensions to HTML and therefore used the same name in their User-Agent identifiers so that web servers would send them the same pages as were sent to Netscape browsers. Mozilla is now a generic name for matters related to the open source successor to Netscape Communicator and is most identified with the browser Firefox.
Rise of Netscape
When the consumer Internet revolution arrived in the mid-1990s, Netscape was well-positioned to take advantage of it. With a good mix of features and an attractive licensing scheme that allowed free use for non-commercial purposes, the Netscape browser soon became the de facto standard, particularly on the Windows platform. Internet service providers and computer magazine publishers helped make Navigator readily available.
An innovation that Netscape introduced in 1994 was the on-the-fly display of web pages, where text and graphics appeared on the screen as the web page downloaded. Earlier web browsers would not display a page until all graphics on it had been loaded over the network connection; this meant a user might have only a blank page for several minutes. With Netscape, people using dial-up connections could begin reading the text of a web page within seconds of entering a web address, even before the rest of the text and graphics had finished downloading. This made the web much more tolerable to the average user.
Through the late 1990s, Netscape made sure that Navigator remained the technical leader among web browsers. New features included cookies, frames, proxy auto-config, and JavaScript (in version 2.0). Although those and other innovations eventually became open standards of the W3C and ECMA and were emulated by other browsers, they were often viewed as controversial. Netscape, according to critics, was more interested in bending the web to its own de facto "standards" (bypassing standards committees and thus marginalizing the commercial competition) than it was in fixing bugs in its products. Consumer rights advocates were particularly critical of cookies and of commercial web sites using them to invade individual privacy.
In the marketplace, however, these concerns made little difference. Netscape Navigator remained the market leader with more than 50% usage share. The browser software was available for a wide range of operating systems, including Windows (3.1, 95, 98, NT), Macintosh, Linux, OS/2, and many versions of Unix including OSF/1, Sun Solaris, BSD/OS, IRIX, AIX, and HP-UX, and looked and worked nearly identically on every one of them. Netscape began to experiment with prototypes of a web-based system, known internally as "Constellation", which would allow a user to access and edit his or her files anywhere across a network no matter what computer or operating system he or she happened to be using.
Industry observers forecast the dawn of a new era of connected computing. The underlying operating system, it was believed, would not be an important consideration; future applications would run within a web browser. This was seen by Netscape as a clear opportunity to entrench Navigator at the heart of the next generation of computing, and thus gain the opportunity to expand into all manner of other software and service markets.
Decline
With the success of Netscape showing the importance of the web (more people were using the Internet due in part to the ease of using Netscape), Internet browsing began to be seen as a potentially profitable market. Following Netscape's lead, Microsoft started a campaign to enter the web browser software market. Like Netscape before them, Microsoft licensed the Mosaic source code from Spyglass, Inc. (which in turn licensed code from University of Illinois). Using this basic code, Microsoft created Internet Explorer (IE).
The competition between Microsoft and Netscape dominated the Browser Wars. Internet Explorer, Version 1.0 (shipped in the Internet Jumpstart Kit in Microsoft Plus! For Windows 95) and IE, Version 2.0 (the first cross-platform version of the web browser, supporting both Windows and Mac OS) were thought by many to be inferior and primitive when compared to contemporary versions of Netscape Navigator. With the release of IE version 3.0 (1996), Microsoft was able to catch up with Netscape competitively, and IE version 4.0 (1997) brought further gains in market share. IE 5.0 (1999) improved stability and took significant market share from Netscape Navigator for the first time.
There were two versions of Netscape Navigator 3.0, the Standard Edition and the Gold Edition. The latter consisted of the Navigator browser with e-mail, news readers, and a WYSIWYG web page compositor; however, these extra functions enlarged and slowed the software, rendering it prone to crashing.
This Gold Edition was renamed Netscape Communicator starting with version 4.0; the name change diluted its name-recognition and confused users. Netscape CEO James L. Barksdale insisted on the name change because Communicator was a general-purpose client application, which contained the Navigator browser.
The aging Netscape Communicator 4.x was slower than Internet Explorer 5.0. Typical web pages had become heavily illustrated, often JavaScript-intensive, and encoded with HTML features designed for specific purposes but now employed as global layout tools (HTML tables, the most obvious example of this, were especially difficult for Communicator to render). The Netscape browser, once a solid product, became crash-prone and buggy; for example, some versions re-downloaded an entire web page to re-render it when the browser window was re-sized (a nuisance to dial-up users), and the browser would usually crash when the page contained simple Cascading Style Sheets, as proper support for CSS never made it into Communicator 4.x. At the time that Communicator 4.0 was being developed, Netscape had a competing technology called JavaScript Style Sheets. Near the end of the development cycle, it became obvious that CSS would prevail, so Netscape quickly implemented a CSS to JSSS converter, which then processed CSS as JSSS (this is why turning JavaScript off also disabled CSS). Moreover, Netscape Communicator's browser interface design appeared dated in comparison to Internet Explorer and interface changes in Microsoft and Apple's operating systems.
By the end of the decade, Netscape's web browser had lost dominance over the Windows platform, and the August 1997 Microsoft financial agreement to invest one hundred and fifty million dollars in Apple required that Apple make Internet Explorer the default web browser in new Mac OS distributions. The latest IE Mac release at that time was Internet Explorer version 3.0 for Macintosh, but Internet Explorer 4 was released later that year.
Microsoft succeeded in having ISPs and PC vendors distribute Internet Explorer to their customers instead of Netscape Navigator, mostly due to Microsoft using its leverage from Windows OEM licenses, and partly aided by Microsoft's investment in making IE brandable, such that a customized version of IE could be offered. Also, web developers used proprietary, browser-specific extensions in web pages. Both Microsoft and Netscape did this, having added many proprietary HTML tags to their browsers, which forced users to choose between two competing and almost incompatible web browsers.
In March 1998, Netscape released most of the development code base for Netscape Communicator under an open source license. Only pre-alpha versions of Netscape 5 were released before the open source community decided to scrap the Netscape Navigator codebase entirely and build a new web browser around the Gecko layout engine which Netscape had been developing but which had not yet incorporated. The community-developed open source project was named Mozilla, Netscape Navigator's original code name. America Online bought Netscape; Netscape programmers took a pre-beta-quality form of the Mozilla codebase, gave it a new GUI, and released it as Netscape 6. This did nothing to win back users, who continued to migrate to Internet Explorer. After the release of Netscape 7 and a long public beta test, Mozilla 1.0 was released on 5 June 2002. The same code-base, notably the Gecko layout engine, became the basis of independent applications, including Firefox and Thunderbird.
On 28 December 2007, the Netscape developers announced that AOL had canceled development of Netscape Navigator, leaving it unsupported as of 1 March 2008. Archived and unsupported versions of the browser remain available for download.
Legacy
Netscape's contributions to the web include JavaScript, which was submitted as a new standard to Ecma International. The resulting ECMAScript specification allowed JavaScript support by multiple web browsers and its use as a cross-browser scripting language, long after Netscape Navigator itself had dropped in popularity. Another example is the FRAME tag, which is widely supported today and has been incorporated into official web standards such as the "HTML 4.01 Frameset" specification.
In a 2007 PC World column, the original Netscape Navigator was considered the "best tech product of all time" due to its impact on the Internet.
See also
Timeline of web browsers
Comparison of web browsers
List of web browsers
Netscape
Mosaic
Mozilla
Lou Montulli
References
External links
Notice for Netscape Navigator 2.02 for OS/2 and Netscape Communicator 4.04 for OS/2 Users
The hidden features of Netscape Navigator 3.0
Netscape Browser Archive - Early Netscape, SillyDog701
1994 software
Cross-platform web browsers
Discontinued web browsers
Gopher clients
History of web browsers
Netscape
OS/2 web browsers
POSIX web browsers |
21918 | https://en.wikipedia.org/wiki/Normal%20subgroup | Normal subgroup | In abstract algebra, a normal subgroup (also known as an invariant subgroup or self-conjugate subgroup) is a subgroup that is invariant under conjugation by members of the group of which it is a part. In other words, a subgroup N of the group G is normal in G if and only if gng⁻¹ ∈ N for all g ∈ G and n ∈ N. The usual notation for this relation is N ◁ G.
Normal subgroups are important because they (and only they) can be used to construct quotient groups of the given group. Furthermore, the normal subgroups of G are precisely the kernels of group homomorphisms with domain G, which means that they can be used to internally classify those homomorphisms.
Évariste Galois was the first to realize the importance of the existence of normal subgroups.
Definitions
A subgroup N of a group G is called a normal subgroup of G if it is invariant under conjugation; that is, the conjugation of an element of N by an element of G is always in N. The usual notation for this relation is N ◁ G.
Equivalent conditions
For any subgroup N of G, the following conditions are equivalent to N being a normal subgroup of G. Therefore, any one of them may be taken as the definition:
The image of conjugation of N by any element of G is a subset of N: gNg⁻¹ ⊆ N for all g ∈ G.
The image of conjugation of N by any element of G is equal to N: gNg⁻¹ = N for all g ∈ G.
For all g ∈ G, the left and right cosets gN and Ng are equal.
The sets of left and right cosets of N in G coincide.
The product of an element of the left coset gN and an element of the left coset hN is an element of the left coset (gh)N: for all x, y, g, h ∈ G, if x ∈ gN and y ∈ hN, then xy ∈ (gh)N.
N is a union of conjugacy classes of G.
N is preserved by the inner automorphisms of G.
There is some group homomorphism G → H whose kernel is N.
For all g ∈ G and n ∈ N, the commutator [g, n] is in N.
Any two elements commute regarding the normal subgroup membership relation: for all g, h ∈ G, gh ∈ N if and only if hg ∈ N.
Examples
For any group G, the trivial subgroup {e} consisting of just the identity element of G is always a normal subgroup of G. Likewise, G itself is always a normal subgroup of G. (If these are the only normal subgroups, then G is said to be simple.) Other named normal subgroups of an arbitrary group include the center of the group (the set of elements that commute with all other elements) and the commutator subgroup [G, G]. More generally, since conjugation is an isomorphism, any characteristic subgroup is a normal subgroup.
If G is an abelian group, then every subgroup N of G is normal, because gN = {gn : n ∈ N} = {ng : n ∈ N} = Ng for all g ∈ G. A group that is not abelian but for which every subgroup is normal is called a Hamiltonian group.
A concrete example of a normal subgroup is the subgroup A₃ = {e, (1 2 3), (1 3 2)} of the symmetric group S₃, consisting of the identity and both three-cycles. In particular, one can check that every coset of A₃ is either equal to A₃ itself or is equal to its complement (1 2)A₃ = {(1 2), (1 3), (2 3)}. On the other hand, the subgroup H = {e, (1 2)} is not normal in S₃, since conjugating (1 2) by the three-cycle (1 2 3) gives (2 3), which does not lie in H. This illustrates the general fact that any subgroup of index two is normal.
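These checks can be carried out mechanically. The following is a minimal sketch (the permutation encoding and helper functions are illustrative, not part of the article) that verifies the first and third equivalent conditions listed above for S₃: conjugation maps A₃ into itself and its left and right cosets coincide, while the two-element subgroup generated by a transposition fails the conjugation test.

from itertools import permutations

def compose(p, q):
    # permutations of {0, 1, 2} are stored as tuples p, with p[i] the image of i;
    # (p ∘ q)(i) = p(q(i))
    return tuple(p[q[i]] for i in range(len(q)))

def inverse(p):
    inv = [0] * len(p)
    for i, image in enumerate(p):
        inv[image] = i
    return tuple(inv)

S3 = set(permutations(range(3)))
A3 = {(0, 1, 2), (1, 2, 0), (2, 0, 1)}   # identity and the two three-cycles
H = {(0, 1, 2), (1, 0, 2)}               # identity and one transposition

def is_normal(N, G):
    # first equivalent condition: g n g⁻¹ lies in N for every g in G and n in N
    return all(compose(compose(g, n), inverse(g)) in N for g in G for n in N)

def left_coset(g, N):
    return {compose(g, n) for n in N}

def right_coset(N, g):
    return {compose(n, g) for n in N}

print(is_normal(A3, S3))   # True: A3 has index two, hence is normal
print(is_normal(H, S3))    # False: conjugating the transposition by a three-cycle leaves H
print(all(left_coset(g, A3) == right_coset(A3, g) for g in S3))   # True: left and right cosets agree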
In the Rubik's Cube group, the subgroups consisting of operations which only affect the orientations of either the corner pieces or the edge pieces are normal.
The translation group is a normal subgroup of the Euclidean group in any dimension. This means: applying a rigid transformation, followed by a translation and then the inverse rigid transformation, has the same effect as a single translation. By contrast, the subgroup of all rotations about the origin is not a normal subgroup of the Euclidean group, as long as the dimension is at least 2: first translating, then rotating about the origin, and then translating back will typically not fix the origin and will therefore not have the same effect as a single rotation about the origin.
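To make the claim about translations concrete, here is a minimal numerical sketch (the pair representation of rigid motions and the helper functions are illustrative, not from the article): a rigid motion of the plane is modeled as a pair (R, t) acting by x ↦ Rx + t, and conjugating a pure translation by a rotation about the origin is seen to produce another pure translation.

import numpy as np

def compose(m1, m2):
    # rigid motions are pairs (R, t) acting by x ↦ R @ x + t; (m1 ∘ m2)(x) = m1(m2(x))
    R1, t1 = m1
    R2, t2 = m2
    return R1 @ R2, R1 @ t2 + t1

def inverse(m):
    # valid because R is orthogonal for a rigid motion
    R, t = m
    return R.T, -R.T @ t

theta = 0.7
rotation = (np.array([[np.cos(theta), -np.sin(theta)],
                      [np.sin(theta),  np.cos(theta)]]),
            np.zeros(2))
translation = (np.eye(2), np.array([3.0, -1.0]))

conjugate = compose(compose(rotation, translation), inverse(rotation))
print(np.allclose(conjugate[0], np.eye(2)))  # True: the linear part is still the identity,
print(conjugate[1])                          # so the conjugate is the pure translation by R @ [3, -1]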
Properties
If H is a normal subgroup of G, and K is a subgroup of G containing H, then H is a normal subgroup of K.
A normal subgroup of a normal subgroup of a group need not be normal in the group. That is, normality is not a transitive relation. The smallest group exhibiting this phenomenon is the dihedral group of order 8. However, a characteristic subgroup of a normal subgroup is normal. A group in which normality is transitive is called a T-group.
The two groups G and H are normal subgroups of their direct product G × H.
If the group G is a semidirect product G = N ⋊ H, then N is normal in G, though H need not be normal in G.
Normality is preserved under surjective homomorphisms; that is, if f : G → H is a surjective group homomorphism and N is normal in G, then the image f(N) is normal in H.
Normality is preserved by taking inverse images; that is, if f : G → H is a group homomorphism and N is normal in H, then the inverse image f⁻¹(N) is normal in G.
Normality is preserved on taking direct products; that is, if N₁ ◁ G₁ and N₂ ◁ G₂, then N₁ × N₂ ◁ G₁ × G₂.
Every subgroup of index 2 is normal. More generally, a subgroup H of finite index n in G contains a subgroup K, normal in G and of index dividing n!, called the normal core. In particular, if p is the smallest prime dividing the order of G, then every subgroup of index p is normal.
The fact that normal subgroups of G are precisely the kernels of group homomorphisms defined on G accounts for some of the importance of normal subgroups; they are a way to internally classify all homomorphisms defined on a group. For example, a non-identity finite group is simple if and only if it is isomorphic to all of its non-identity homomorphic images, a finite group is perfect if and only if it has no normal subgroups of prime index, and a group is imperfect if and only if the derived subgroup is not supplemented by any proper normal subgroup.
Lattice of normal subgroups
Given two normal subgroups, N and M, of G, their intersection N ∩ M and their product NM = {nm : n ∈ N, m ∈ M} are also normal subgroups of G.
The normal subgroups of G form a lattice under subset inclusion with least element {e} and greatest element G. The meet of two normal subgroups, N and M, in this lattice is their intersection N ∩ M, and the join is their product NM.
The lattice is complete and modular.
Normal subgroups, quotient groups and homomorphisms
If N is a normal subgroup of G, we can define a multiplication on cosets as follows: (a₁N)(a₂N) := (a₁a₂)N.
This relation defines a mapping G/N × G/N → G/N. To show that this mapping is well-defined, one needs to prove that the choice of representative elements does not affect the result. To this end, consider some other representative elements a₁′ ∈ a₁N and a₂′ ∈ a₂N. Then there are n₁, n₂ ∈ N such that a₁′ = a₁n₁ and a₂′ = a₂n₂. It follows that a₁′a₂′ = a₁n₁a₂n₂ = a₁a₂n₁′n₂ ∈ (a₁a₂)N, where we also used the fact that N is a normal subgroup, and therefore there is n₁′ ∈ N such that n₁a₂ = a₂n₁′. This proves that this product is a well-defined mapping between cosets.
With this operation, the set of cosets is itself a group, called the quotient group and denoted G/N. There is a natural homomorphism f : G → G/N given by f(a) = aN. This homomorphism maps N into the identity element of G/N, which is the coset eN = N; that is, f(N) = {N}.
In general, a group homomorphism f : G → H sends subgroups of G to subgroups of H. Also, the preimage of any subgroup of H is a subgroup of G. We call the preimage of the trivial group {e} in H the kernel of the homomorphism and denote it by ker f. As it turns out, the kernel is always normal and the image f(G) of G is always isomorphic to G/ker f (the first isomorphism theorem). In fact, this correspondence is a bijection between the set of all quotient groups G/N of G and the set of all homomorphic images of G (up to isomorphism). It is also easy to see that the kernel of the quotient map, f : G → G/N, is N itself, so the normal subgroups are precisely the kernels of homomorphisms with domain G.
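As an illustration of the construction just described (a hypothetical sketch, not part of the article), the following code builds the quotient S₃/A₃ from the cosets of the normal subgroup A₃ and confirms that the coset multiplication is independent of the chosen representatives.

from itertools import permutations

def compose(p, q):
    # permutations of {0, 1, 2} as tuples; (p ∘ q)(i) = p(q(i))
    return tuple(p[q[i]] for i in range(len(q)))

S3 = set(permutations(range(3)))
A3 = {(0, 1, 2), (1, 2, 0), (2, 0, 1)}   # the normal subgroup of three-cycles plus the identity

# the left cosets gA3 partition S3 into |S3| / |A3| = 2 classes
cosets = {frozenset(compose(g, n) for n in A3) for g in S3}
print(len(cosets))   # 2

def coset_of(x):
    return next(c for c in cosets if x in c)

# well-definedness: whichever representatives x ∈ aN and y ∈ bN are chosen,
# the product xy always lands in the coset of ab
well_defined = all(
    coset_of(compose(x, y)) == coset_of(compose(a, b))
    for a in S3 for b in S3
    for x in coset_of(a) for y in coset_of(b)
)
print(well_defined)   # True, precisely because A3 is normal in S3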
See also
Operations taking subgroups to subgroups
Normalizer
Conjugate closure
Normal core
Subgroup properties complementary (or opposite) to normality
Malnormal subgroup
Contranormal subgroup
Abnormal subgroup
Self-normalizing subgroup
Subgroup properties stronger than normality
Characteristic subgroup
Fully characteristic subgroup
Subgroup properties weaker than normality
Subnormal subgroup
Ascendant subgroup
Descendant subgroup
Quasinormal subgroup
Seminormal subgroup
Conjugate permutable subgroup
Modular subgroup
Pronormal subgroup
Paranormal subgroup
Polynormal subgroup
C-normal subgroup
Related notions in algebra
Ideal (ring theory)
Notes
References
Further reading
I. N. Herstein, Topics in algebra. Second edition. Xerox College Publishing, Lexington, Mass.-Toronto, Ont., 1975. xi+388 pp.
External links
Normal subgroup in Springer's Encyclopedia of Mathematics
Robert Ash: Group Fundamentals in Abstract Algebra. The Basic Graduate Year
Timothy Gowers, Normal subgroups and quotient groups
John Baez, What's a Normal Subgroup?
Subgroup properties |
21939 | https://en.wikipedia.org/wiki/National%20Security%20Agency | National Security Agency | The National Security Agency (NSA) is a national-level intelligence agency of the United States Department of Defense, under the authority of the Director of National Intelligence (DNI). The NSA is responsible for global monitoring, collection, and processing of information and data for foreign and domestic intelligence and counterintelligence purposes, specializing in a discipline known as signals intelligence (SIGINT). The NSA is also tasked with the protection of U.S. communications networks and information systems. The NSA relies on a variety of measures to accomplish its mission, the majority of which are clandestine. The existence of the NSA was not revealed until 1975.
Originating as a unit to decipher coded communications in World War II, it was officially formed as the NSA by President Harry S. Truman in 1952. Between then and the end of the Cold War, it became the largest of the U.S. intelligence organizations in terms of personnel and budget, but information available as of 2013 indicates that the CIA pulled ahead in this regard, with a budget of $14.7 billion. The NSA currently conducts worldwide mass data collection and has been known to physically bug electronic systems as one method to this end. The NSA is also alleged to have been behind such attack software as Stuxnet, which severely damaged Iran's nuclear program. The NSA, alongside the Central Intelligence Agency (CIA), maintains a physical presence in many countries across the globe; the CIA/NSA joint Special Collection Service (a highly classified intelligence team) inserts eavesdropping devices in high value targets (such as presidential palaces or embassies). SCS collection tactics allegedly encompass "close surveillance, burglary, wiretapping, [and] breaking and entering".
Unlike the CIA and the Defense Intelligence Agency (DIA), both of which specialize primarily in foreign human espionage, the NSA does not publicly conduct human-source intelligence gathering. The NSA is entrusted with providing assistance to, and the coordination of, SIGINT elements for other government organizations – which are prevented by law from engaging in such activities on their own. As part of these responsibilities, the agency has a co-located organization called the Central Security Service (CSS), which facilitates cooperation between the NSA and other U.S. defense cryptanalysis components. To further ensure streamlined communication between the signals intelligence community divisions, the NSA Director simultaneously serves as the Commander of the United States Cyber Command and as Chief of the Central Security Service.
The NSA's actions have been a matter of political controversy on several occasions, including its spying on anti–Vietnam War leaders and the agency's participation in economic espionage. In 2013, the NSA had many of its secret surveillance programs revealed to the public by Edward Snowden, a former NSA contractor. According to the leaked documents, the NSA intercepts and stores the communications of over a billion people worldwide, including United States citizens. The documents also revealed the NSA tracks hundreds of millions of people's movements using cellphones' metadata. Internationally, research has pointed to the NSA's ability to surveil the domestic Internet traffic of foreign countries through "boomerang routing".
History
Formation
The origins of the National Security Agency can be traced back to April 28, 1917, three weeks after the U.S. Congress declared war on Germany in World War I. A code and cipher decryption unit was established as the Cable and Telegraph Section which was also known as the Cipher Bureau. It was headquartered in Washington, D.C. and was part of the war effort under the executive branch without direct Congressional authorization. During the course of the war, it was relocated in the army's organizational chart several times. On July 5, 1917, Herbert O. Yardley was assigned to head the unit. At that point, the unit consisted of Yardley and two civilian clerks. It absorbed the Navy's cryptanalysis functions in July 1918. World War I ended on November 11, 1918, and the army cryptographic section of Military Intelligence (MI-8) moved to New York City on May 20, 1919, where it continued intelligence activities as the Code Compilation Company under the direction of Yardley.
The Black Chamber
After the disbandment of the U.S. Army cryptographic section of military intelligence, known as MI-8, in 1919, the U.S. government created the Cipher Bureau, also known as Black Chamber. The Black Chamber was the United States' first peacetime cryptanalytic organization. Jointly funded by the Army and the State Department, the Cipher Bureau was disguised as a New York City commercial code company; it actually produced and sold such codes for business use. Its true mission, however, was to break the communications (chiefly diplomatic) of other nations. At the Washington Naval Conference, it aided American negotiators by providing them with the decrypted traffic of many of the conference delegations, including the Japanese. The Black Chamber successfully persuaded Western Union, the largest U.S. telegram company at the time, as well as several other communications companies to illegally give the Black Chamber access to cable traffic of foreign embassies and consulates. Soon, these companies publicly discontinued their collaboration.
Despite the Chamber's initial successes, it was shut down in 1929 by U.S. Secretary of State Henry L. Stimson, who defended his decision by stating, "Gentlemen do not read each other's mail."
World War II and its aftermath
During World War II, the Signal Intelligence Service (SIS) was created to intercept and decipher the communications of the Axis powers. When the war ended, the SIS was reorganized as the Army Security Agency (ASA), and it was placed under the leadership of the Director of Military Intelligence.
On May 20, 1949, all cryptologic activities were centralized under a national organization called the Armed Forces Security Agency (AFSA). This organization was originally established within the U.S. Department of Defense under the command of the Joint Chiefs of Staff. The AFSA was tasked to direct Department of Defense communications and electronic intelligence activities, except those of U.S. military intelligence units. However, the AFSA was unable to centralize communications intelligence and failed to coordinate with civilian agencies that shared its interests such as the Department of State, Central Intelligence Agency (CIA) and the Federal Bureau of Investigation (FBI). In December 1951, President Harry S. Truman ordered a panel to investigate how AFSA had failed to achieve its goals. The results of the investigation led to improvements and its redesignation as the National Security Agency.
The National Security Council issued a memorandum of October 24, 1952, that revised National Security Council Intelligence Directive (NSCID) 9. On the same day, Truman issued a second memorandum that called for the establishment of the NSA. The actual establishment of the NSA was done by a November 4 memo by Robert A. Lovett, the Secretary of Defense, changing the name of the AFSA to the NSA, and making the new agency responsible for all communications intelligence. Since President Truman's memo was a classified document, the existence of the NSA was not known to the public at that time. Due to its ultra-secrecy the U.S. intelligence community referred to the NSA as "No Such Agency".
Vietnam War
In the 1960s, the NSA played a key role in expanding U.S. commitment to the Vietnam War by providing evidence of a North Vietnamese attack on the American destroyer USS Maddox during the Gulf of Tonkin incident.
A secret operation, code-named "MINARET", was set up by the NSA to monitor the phone communications of Senators Frank Church and Howard Baker, as well as key leaders of the civil rights movement, including Martin Luther King Jr., and prominent U.S. journalists and athletes who criticized the Vietnam War. However, the project turned out to be controversial, and an internal review by the NSA concluded that its Minaret program was "disreputable if not outright illegal".
The NSA mounted a major effort to secure tactical communications among U.S. forces during the war with mixed success. The NESTOR family of compatible secure voice systems it developed was widely deployed during the Vietnam War, with about 30,000 NESTOR sets produced. However, a variety of technical and operational problems limited their use, allowing the North Vietnamese to exploit and intercept U.S. communications.
Church Committee hearings
In the aftermath of the Watergate scandal, a congressional hearing in 1975 led by Senator Frank Church revealed that the NSA, in collaboration with Britain's SIGINT intelligence agency Government Communications Headquarters (GCHQ), had routinely intercepted the international communications of prominent anti-Vietnam war leaders such as Jane Fonda and Dr. Benjamin Spock. The Agency tracked these individuals in a secret filing system that was destroyed in 1974. Following the resignation of President Richard Nixon, there were several investigations of suspected misuse of FBI, CIA and NSA facilities. Senator Frank Church uncovered previously unknown activity, such as a CIA plot (ordered by the administration of President John F. Kennedy) to assassinate Fidel Castro. The investigation also uncovered NSA's wiretaps on targeted U.S. citizens.
After the Church Committee hearings, the Foreign Intelligence Surveillance Act of 1978 was passed. This was designed to limit the practice of mass surveillance in the United States.
From 1980s to 1990s
In 1986, the NSA intercepted the communications of the Libyan government during the immediate aftermath of the Berlin discotheque bombing. The White House asserted that the NSA interception had provided "irrefutable" evidence that Libya was behind the bombing, which U.S. President Ronald Reagan cited as a justification for the 1986 United States bombing of Libya.
In 1999, a multi-year investigation by the European Parliament highlighted the NSA's role in economic espionage in a report entitled 'Development of Surveillance Technology and Risk of Abuse of Economic Information'. That year, the NSA founded the NSA Hall of Honor, a memorial at the National Cryptologic Museum in Fort Meade, Maryland. The memorial is a "tribute to the pioneers and heroes who have made significant and long-lasting contributions to American cryptology". NSA employees must be retired for more than fifteen years to qualify for the memorial.
NSA's infrastructure deteriorated in the 1990s as defense budget cuts resulted in maintenance deferrals. On January 24, 2000, NSA headquarters suffered a total network outage for three days caused by an overloaded network. Incoming traffic was successfully stored on agency servers, but it could not be directed and processed. The agency carried out emergency repairs at a cost of $3 million to get the system running again. (Some incoming traffic was also directed instead to Britain's GCHQ for the time being.) Director Michael Hayden called the outage a "wake-up call" for the need to invest in the agency's infrastructure.
In the 1990s the defensive arm of the NSA—the Information Assurance Directorate (IAD)—started working more openly; the first public technical talk by an NSA scientist at a major cryptography conference was J. Solinas' presentation on efficient Elliptic Curve Cryptography algorithms at Crypto 1997. The IAD's cooperative approach to academia and industry culminated in its support for a transparent process for replacing the outdated Data Encryption Standard (DES) by an Advanced Encryption Standard (AES). Cybersecurity policy expert Susan Landau attributes the NSA's harmonious collaboration with industry and academia in the selection of the AES in 2000—and the Agency's support for the choice of a strong encryption algorithm designed by Europeans rather than by Americans—to Brian Snow, who was the Technical Director of IAD and represented the NSA as cochairman of the Technical Working Group for the AES competition, and Michael Jacobs, who headed IAD at the time.
After the terrorist attacks of September 11, 2001, the NSA believed that it had public support for a dramatic expansion of its surveillance activities. According to Neal Koblitz and Alfred Menezes, the period when the NSA was a trusted partner with academia and industry in the development of cryptographic standards started to come to an end when, as part of the change in the NSA in the post-September 11 era, Snow was replaced as Technical Director, Jacobs retired, and IAD could no longer effectively oppose proposed actions by the offensive arm of the NSA.
War on Terror
In the aftermath of the September 11 attacks, the NSA created new IT systems to deal with the flood of information from new technologies like the Internet and cellphones. ThinThread contained advanced data mining capabilities. It also had a "privacy mechanism"; surveillance was stored encrypted; decryption required a warrant. The research done under this program may have contributed to the technology used in later systems. ThinThread was cancelled when Michael Hayden chose Trailblazer, which did not include ThinThread's privacy system.
Trailblazer Project ramped up in 2002 and was worked on by Science Applications International Corporation (SAIC), Boeing, Computer Sciences Corporation, IBM, and Litton Industries. Some NSA whistleblowers complained internally about major problems surrounding Trailblazer. This led to investigations by Congress and the NSA and DoD Inspectors General. The project was cancelled in early 2004.
Turbulence started in 2005. It was developed in small, inexpensive "test" pieces, rather than one grand plan like Trailblazer. It also included offensive cyber-warfare capabilities, like injecting malware into remote computers. Congress criticized Turbulence in 2007 for having similar bureaucratic problems as Trailblazer. It was to be a realization of information processing at higher speeds in cyberspace.
Global surveillance disclosures
The massive extent of the NSA's spying, both foreign and domestic, was revealed to the public in a series of detailed disclosures of internal NSA documents beginning in June 2013. Most of the disclosures were leaked by former NSA contractor Edward Snowden. On 4 September 2020, the NSA's surveillance program was ruled unlawful by the US Court of Appeals. The court also added that the US intelligence leaders, who publicly defended it, were not telling the truth.
Mission
NSA's eavesdropping mission includes radio broadcasting, both from various organizations and individuals, the Internet, telephone calls, and other intercepted forms of communication. Its secure communications mission includes military, diplomatic, and all other sensitive, confidential or secret government communications.
According to a 2010 article in The Washington Post, "[e]very day, collection systems at the National Security Agency intercept and store 1.7 billion e-mails, phone calls and other types of communications. The NSA sorts a fraction of those into 70 separate databases."
Because of its listening task, NSA/CSS has been heavily involved in cryptanalytic research, continuing the work of predecessor agencies which had broken many World War II codes and ciphers (see, for instance, Purple, Venona project, and JN-25).
In 2004, NSA Central Security Service and the National Cyber Security Division of the Department of Homeland Security (DHS) agreed to expand the NSA Centers of Academic Excellence in Information Assurance Education Program.
As part of the National Security Presidential Directive 54/Homeland Security Presidential Directive 23 (NSPD 54), signed on January 8, 2008, by President Bush, the NSA became the lead agency to monitor and protect all of the federal government's computer networks from cyber-terrorism.
A part of NSA's mission is to serve as a combat support agency for the Department of Defense.
Operations
Operations by the National Security Agency can be divided into three types:
Collection overseas, which falls under the responsibility of the Global Access Operations (GAO) division.
Domestic collection, which falls under the responsibility of the Special Source Operations (SSO) division.
Hacking operations, which fall under the responsibility of the Tailored Access Operations (TAO) division.
Collection overseas
Echelon
"Echelon" was created in the incubator of the Cold War. Today it is a legacy system, and several NSA stations are closing.
NSA/CSS, in combination with the equivalent agencies in the United Kingdom (Government Communications Headquarters), Canada (Communications Security Establishment), Australia (Australian Signals Directorate), and New Zealand (Government Communications Security Bureau), otherwise known as the UKUSA group, was reported to be in command of the operation of the so-called ECHELON system. Its capabilities were suspected to include the ability to monitor a large proportion of the world's transmitted civilian telephone, fax and data traffic.
During the early 1970s, the first of what became more than eight large satellite communications dishes were installed at Menwith Hill. Investigative journalist Duncan Campbell reported in 1988 on the "ECHELON" surveillance program, an extension of the UKUSA Agreement on global signals intelligence SIGINT, and detailed how the eavesdropping operations worked. On November 3, 1999, the BBC reported that they had confirmation from the Australian Government of the existence of a powerful "global spying network" code-named Echelon, that could "eavesdrop on every single phone call, fax or e-mail, anywhere on the planet" with Britain and the United States as the chief protagonists. They confirmed that Menwith Hill was "linked directly to the headquarters of the US National Security Agency (NSA) at Fort Meade in Maryland".
NSA's United States Signals Intelligence Directive 18 (USSID 18) strictly prohibited the interception or collection of information about "... U.S. persons, entities, corporations or organizations...." without explicit written legal permission from the United States Attorney General when the subject is located abroad, or the Foreign Intelligence Surveillance Court when within U.S. borders. Alleged Echelon-related activities, including its use for motives other than national security, including political and industrial espionage, received criticism from countries outside the UKUSA alliance.
Other SIGINT operations overseas
The NSA was also involved in planning to blackmail people with "SEXINT", intelligence gained about a potential target's sexual activity and preferences. Those targeted had not committed any apparent crime nor were they charged with one.
In order to support its facial recognition program, the NSA is intercepting "millions of images per day".
The Real Time Regional Gateway is a data collection program introduced in 2005 in Iraq by NSA during the Iraq War that consisted of gathering all electronic communication, storing it, then searching and otherwise analyzing it. It was effective in providing information about Iraqi insurgents who had eluded less comprehensive techniques. This "collect it all" strategy introduced by NSA director, Keith B. Alexander, is believed by Glenn Greenwald of The Guardian to be the model for the comprehensive worldwide mass archiving of communications which NSA is engaged in as of 2013.
A dedicated unit of the NSA locates targets for the CIA for extrajudicial assassination in the Middle East. The NSA has also spied extensively on the European Union, the United Nations and numerous governments including allies and trading partners in Europe, South America and Asia.
In June 2015, WikiLeaks published documents showing that NSA spied on French companies.
In July 2015, WikiLeaks published documents showing that the NSA had spied on federal German ministries since the 1990s. Even the cellphones of Germany's Chancellor Angela Merkel and the phones of her predecessors had been intercepted.
Boundless Informant
Edward Snowden revealed in June 2013 that between February 8 and March 8, 2013, the NSA collected about 124.8 billion telephone data items and 97.1 billion computer data items throughout the world, as was displayed in charts from an internal NSA tool codenamed Boundless Informant. Initially, it was reported that some of these data reflected eavesdropping on citizens in countries like Germany, Spain and France, but later on, it became clear that those data were collected by European agencies during military missions abroad and were subsequently shared with NSA.
Bypassing encryption
In 2013, reporters uncovered a secret memo claiming that in 2006 the NSA had created the Dual EC DRBG standard, which contained built-in vulnerabilities, and had pushed for its adoption by the United States National Institute of Standards and Technology (NIST) and the International Organization for Standardization (ISO). This memo appears to give credence to previous speculation by cryptographers at Microsoft Research. Edward Snowden claims that the NSA often bypasses encryption altogether by lifting information before it is encrypted or after it is decrypted.
XKeyscore rules (as specified in a file xkeyscorerules100.txt, sourced by German TV stations NDR and WDR, who claim to have excerpts from its source code) reveal that the NSA tracks users of privacy-enhancing software tools, including Tor; an anonymous email service provided by the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) in Cambridge, Massachusetts; and readers of the Linux Journal.
Software backdoors
Linus Torvalds, the founder of the Linux kernel, joked during a LinuxCon keynote on September 18, 2013, that the NSA, the founder of SELinux, wanted a backdoor in the kernel. However, Linus's father, a Member of the European Parliament (MEP), later revealed that the NSA had in fact made such a request.
IBM Notes was the first widely adopted software product to use public key cryptography for client–server and server–server authentication and for encryption of data. Until US laws regulating encryption were changed in 2000, IBM and Lotus were prohibited from exporting versions of Notes that supported symmetric encryption keys that were longer than 40 bits. In 1997, Lotus negotiated an agreement with the NSA that allowed the export of a version that supported stronger keys with 64 bits, but 24 of the bits were encrypted with a special key and included in the message to provide a "workload reduction factor" for the NSA. This strengthened the protection for users of Notes outside the US against private-sector industrial espionage, but not against spying by the US government.
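The "workload reduction factor" can be made explicit with a little arithmetic. The sketch below is an illustration, not taken from the source, and assumes a plain brute-force key search: handing over 24 of the 64 key bits leaves the NSA facing the effort of a 40-bit search, while other attackers still face the full 64-bit keyspace.

# hypothetical back-of-the-envelope calculation, assuming brute-force key search
full_key_bits = 64
escrowed_bits = 24  # bits encrypted with the NSA's key and shipped inside the message

trials_for_other_attackers = 2 ** full_key_bits                 # ~1.8e19 candidate keys
trials_for_nsa = 2 ** (full_key_bits - escrowed_bits)           # ~1.1e12, i.e. a 40-bit search
print(f"workload reduction factor: 2**{escrowed_bits} = {2 ** escrowed_bits:,}")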
Boomerang routing
While it is assumed that foreign transmissions terminating in the U.S. (such as a non-U.S. citizen accessing a U.S. website) subject non-U.S. citizens to NSA surveillance, recent research into boomerang routing has raised new concerns about the NSA's ability to surveil the domestic Internet traffic of foreign countries. Boomerang routing occurs when an Internet transmission that originates and terminates in a single country transits another. Research at the University of Toronto has suggested that approximately 25% of Canadian domestic traffic may be subject to NSA surveillance activities as a result of the boomerang routing of Canadian Internet service providers.
Hardware implanting
A document included in NSA files released with Glenn Greenwald's book No Place to Hide details how the agency's Tailored Access Operations (TAO) and other NSA units gain access to hardware. They intercept routers, servers and other network hardware being shipped to organizations targeted for surveillance and install covert implant firmware onto them before they are delivered. This was described by an NSA manager as "some of the most productive operations in TAO because they preposition access points into hard target networks around the world."
Computers seized by the NSA due to interdiction are often modified with a physical device known as Cottonmouth. Cottonmouth is a device that can be inserted in the USB port of a computer in order to establish remote access to the targeted machine. According to NSA's Tailored Access Operations (TAO) group implant catalog, after implanting Cottonmouth, the NSA can establish a network bridge "that allows the NSA to load exploit software onto modified computers as well as allowing the NSA to relay commands and data between hardware and software implants."
Domestic collection
NSA's mission, as set forth in Executive Order 12333 in 1981, is to collect information that constitutes "foreign intelligence or counterintelligence" while not "acquiring information concerning the domestic activities of United States persons". NSA has declared that it relies on the FBI to collect information on foreign intelligence activities within the borders of the United States, while confining its own activities within the United States to the embassies and missions of foreign nations.
The appearance of a 'Domestic Surveillance Directorate' of the NSA was soon exposed as a hoax in 2013.
NSA's domestic surveillance activities are limited by the requirements imposed by the Fourth Amendment to the U.S. Constitution. The Foreign Intelligence Surveillance Court for example held in October 2011, citing multiple Supreme Court precedents, that the Fourth Amendment prohibitions against unreasonable searches and seizures apply to the contents of all communications, whatever the means, because "a person's private communications are akin to personal papers." However, these protections do not apply to non-U.S. persons located outside of U.S. borders, so the NSA's foreign surveillance efforts are subject to far fewer limitations under U.S. law. The specific requirements for domestic surveillance operations are contained in the Foreign Intelligence Surveillance Act of 1978 (FISA), which does not extend protection to non-U.S. citizens located outside of U.S. territory.
President's Surveillance Program
George W. Bush, president during the 9/11 terrorist attacks, approved the Patriot Act shortly after the attacks to take anti-terrorist security measures. Titles 1, 2, and 9 specifically authorized measures that would be taken by the NSA. These titles granted enhanced domestic security against terrorism, surveillance procedures, and improved intelligence, respectively. On March 10, 2004, there was a debate between President Bush and White House Counsel Alberto Gonzales, Attorney General John Ashcroft, and Acting Attorney General James Comey. The Attorneys General were unsure if the NSA's programs could be considered constitutional. They threatened to resign over the matter, but ultimately the NSA's programs continued. On March 11, 2004, President Bush signed a new authorization for mass surveillance of Internet records, in addition to the surveillance of phone records. This allowed the president to override laws such as the Foreign Intelligence Surveillance Act, which protected civilians from mass surveillance. In addition, President Bush also signed an authorization declaring the mass surveillance measures to be retroactively in effect.
One such surveillance program, authorized by the U.S. Signals Intelligence Directive 18 of President George Bush, was the Highlander Project undertaken for the National Security Agency by the U.S. Army 513th Military Intelligence Brigade. NSA relayed telephone (including cell phone) conversations obtained from ground, airborne, and satellite monitoring stations to various U.S. Army Signal Intelligence Officers, including the 201st Military Intelligence Battalion. Conversations of citizens of the U.S. were intercepted, along with those of other nations.
Proponents of the surveillance program claim that the President has executive authority to order such action, arguing that laws such as FISA are overridden by the President's Constitutional powers. In addition, some argued that FISA was implicitly overridden by a subsequent statute, the Authorization for Use of Military Force, although the Supreme Court's ruling in Hamdan v. Rumsfeld deprecates this view.
The PRISM program
Under the PRISM program, which started in 2007, NSA gathers Internet communications from foreign targets from nine major U.S. Internet-based communication service providers: Microsoft, Yahoo, Google, Facebook, PalTalk, AOL, Skype, YouTube and Apple. Data gathered include email, videos, photos, VoIP chats such as Skype, and file transfers.
Former NSA director General Keith Alexander claimed that in September 2009 the NSA prevented Najibullah Zazi and his friends from carrying out a terrorist attack. However, this claim has been debunked and no evidence has been presented demonstrating that the NSA has ever been instrumental in preventing a terrorist attack.
Hacking operations
Besides the more traditional ways of eavesdropping in order to collect signals intelligence, NSA is also engaged in hacking computers, smartphones and their networks. A division which conducts such operations is the Tailored Access Operations (TAO) division, which has been active since at least circa 1998.
According to Foreign Policy magazine, "... the Office of Tailored Access Operations, or TAO, has successfully penetrated Chinese computer and telecommunications systems for almost 15 years, generating some of the best and most reliable intelligence information about what is going on inside the People's Republic of China."
In an interview with Wired magazine, Edward Snowden said the Tailored Access Operations division accidentally caused Syria's internet blackout in 2012.
Organizational structure
The NSA is led by the Director of the National Security Agency (DIRNSA), who also serves as Chief of the Central Security Service (CHCSS) and Commander of the United States Cyber Command (USCYBERCOM) and is the highest-ranking military official of these organizations. He is assisted by a Deputy Director, who is the highest-ranking civilian within the NSA/CSS.
NSA also has an Inspector General, head of the Office of the Inspector General (OIG), a General Counsel, head of the Office of the General Counsel (OGC) and a Director of Compliance, who is head of the Office of the Director of Compliance (ODOC).
Unlike other intelligence organizations such as the CIA or DIA, NSA has always been particularly reticent concerning its internal organizational structure.
As of the mid-1990s, the National Security Agency was organized into five Directorates:
The Operations Directorate, which was responsible for SIGINT collection and processing.
The Technology and Systems Directorate, which develops new technologies for SIGINT collection and processing.
The Information Systems Security Directorate, which was responsible for NSA's communications and information security missions.
The Plans, Policy and Programs Directorate, which provided staff support and general direction for the Agency.
The Support Services Directorate, which provided logistical and administrative support activities.
Each of these directorates consisted of several groups or elements, designated by a letter. There were for example the A Group, which was responsible for all SIGINT operations against the Soviet Union and Eastern Europe, and G Group, which was responsible for SIGINT related to all non-communist countries. These groups were divided into units designated by an additional number, like unit A5 for breaking Soviet codes, and G6, being the office for the Middle East, North Africa, Cuba, Central and South America.
Directorates
NSA has about a dozen directorates, which are designated by a letter, although not all of them are publicly known.
In the year 2000, a leadership team was formed, consisting of the Director, the Deputy Director and the Directors of the Signals Intelligence (SID), the Information Assurance (IAD) and the Technical Directorate (TD). The chiefs of other main NSA divisions became associate directors of the senior leadership team.
After president George W. Bush initiated the President's Surveillance Program (PSP) in 2001, the NSA created a 24-hour Metadata Analysis Center (MAC), followed in 2004 by the Advanced Analysis Division (AAD), with the mission of analyzing content, Internet metadata and telephone metadata. Both units were part of the Signals Intelligence Directorate.
A 2016 proposal would combine the Signals Intelligence Directorate with Information Assurance Directorate into Directorate of Operations.
NSANet
NSANet stands for National Security Agency Network and is the official NSA intranet. It is a classified network, for information up to the level of TS/SCI to support the use and sharing of intelligence data between NSA and the signals intelligence agencies of the four other nations of the Five Eyes partnership. The management of NSANet has been delegated to the Central Security Service Texas (CSSTEXAS).
NSANet is a highly secured computer network consisting of fiber-optic and satellite communication channels which are almost completely separated from the public Internet. The network allows NSA personnel and civilian and military intelligence analysts anywhere in the world to have access to the agency's systems and databases. This access is tightly controlled and monitored. For example, every keystroke is logged, activities are audited at random and downloading and printing of documents from NSANet are recorded.
In 1998, NSANet, along with NIPRNET and SIPRNET, had "significant problems with poor search capabilities, unorganized data and old information". In 2004, the network was reported to have used over twenty commercial off-the-shelf operating systems. Some universities that do highly sensitive research are allowed to connect to it.
The thousands of Top Secret internal NSA documents that were taken by Edward Snowden in 2013 were stored in "a file-sharing location on the NSA's intranet site"; so, they could easily be read online by NSA personnel. Everyone with a TS/SCI-clearance had access to these documents. As a system administrator, Snowden was responsible for moving accidentally misplaced highly sensitive documents to safer storage locations.
Watch centers
The NSA maintains at least two watch centers:
National Security Operations Center (NSOC), which is the NSA's current operations center and focal point for time-sensitive SIGINT reporting for the United States SIGINT System (USSS). This center was established in 1968 as the National SIGINT Watch Center (NSWC) and renamed into National SIGINT Operations Center (NSOC) in 1973. This "nerve center of the NSA" got its current name in 1996.
NSA/CSS Threat Operations Center (NTOC), which is the primary NSA/CSS partner for Department of Homeland Security response to cyber incidents. The NTOC establishes real-time network awareness and threat characterization capabilities to forecast, alert, and attribute malicious activity and enable the coordination of Computer Network Operations. The NTOC was established in 2004 as a joint Information Assurance and Signals Intelligence project.
NSA Police
The NSA has its own police force, known as NSA Police (and formerly as NSA Security Protective Force) which provides law enforcement services, emergency response and physical security to the NSA's people and property.
NSA Police are armed federal officers. NSA Police have use of a K9 division, which generally conducts explosive detection screening of mail, vehicles and cargo entering NSA grounds.
NSA Police use marked vehicles to carry out patrols.
Employees
The number of NSA employees is officially classified but there are several sources providing estimates.
In 1961, NSA had 59,000 military and civilian employees, which grew to 93,067 in 1969, of which 19,300 worked at the headquarters at Fort Meade. In the early 1980s, NSA had roughly 50,000 military and civilian personnel. By 1989 this number had grown again to 75,000, of which 25,000 worked at the NSA headquarters. Between 1990 and 1995 the NSA's budget and workforce were cut by one third, which led to a substantial loss of experience.
In 2012, the NSA said more than 30,000 employees worked at Fort Meade and other facilities. In 2012, John C. Inglis, the deputy director, said that the total number of NSA employees is "somewhere between 37,000 and one billion" as a joke, and stated that the agency is "probably the biggest employer of introverts." In 2013 Der Spiegel stated that the NSA had 40,000 employees. More widely, it has been described as the world's largest single employer of mathematicians. Some NSA employees form part of the workforce of the National Reconnaissance Office (NRO), the agency that provides the NSA with satellite signals intelligence.
As of 2013 about 1,000 system administrators work for the NSA.
Personnel security
The NSA received criticism early on in 1960 after two agents had defected to the Soviet Union. Investigations by the House Un-American Activities Committee and a special subcommittee of the United States House Committee on Armed Services revealed severe cases of ignorance of personnel security regulations, prompting the former personnel director and the director of security to step down and leading to the adoption of stricter security practices. Nonetheless, security breaches recurred only a year later when, in an issue of Izvestia of July 23, 1963, a former NSA employee published several cryptologic secrets.
The very same day, an NSA clerk-messenger committed suicide as ongoing investigations disclosed that he had sold secret information to the Soviets on a regular basis. The reluctance of Congressional houses to look into these affairs had prompted a journalist to write, "If a similar series of tragic blunders occurred in any ordinary agency of Government an aroused public would insist that those responsible be officially censured, demoted, or fired." David Kahn criticized the NSA's tactics of concealing its doings as smug and the Congress' blind faith in the agency's right-doing as shortsighted, and pointed out the necessity of surveillance by the Congress to prevent abuse of power.
Edward Snowden's leaking of the existence of PRISM in 2013 caused the NSA to institute a "two-man rule", where two system administrators are required to be present when one accesses certain sensitive information. Snowden claims he suggested such a rule in 2009.
Polygraphing
The NSA conducts polygraph tests of employees. For new employees, the tests are meant to discover enemy spies who are applying to the NSA and to uncover any information that could make an applicant pliant to coercion. As part of the latter, historically EPQs or "embarrassing personal questions" about sexual behavior had been included in the NSA polygraph. The NSA also conducts five-year periodic reinvestigation polygraphs of employees, focusing on counterintelligence programs. In addition the NSA conducts periodic polygraph investigations in order to find spies and leakers; those who refuse to take them may receive "termination of employment", according to a 1982 memorandum from the director of the NSA.
There are also "special access examination" polygraphs for employees who wish to work in highly sensitive areas, and those polygraphs cover counterintelligence questions and some questions about behavior. NSA's brochure states that the average test length is between two and four hours. A 1983 report of the Office of Technology Assessment stated that "It appears that the NSA [National Security Agency] (and possibly CIA) use the polygraph not to determine deception or truthfulness per se, but as a technique of interrogation to encourage admissions." Sometimes applicants in the polygraph process confess to committing felonies such as murder, rape, and selling of illegal drugs. Between 1974 and 1979, of the 20,511 job applicants who took polygraph tests, 695 (3.4%) confessed to previous felony crimes; almost all of those crimes had been undetected.
In 2010 the NSA produced a video explaining its polygraph process. The video, ten minutes long, is titled "The Truth About the Polygraph" and was posted to the Web site of the Defense Security Service. Jeff Stein of The Washington Post said that the video portrays "various applicants, or actors playing them—it's not clear—describing everything bad they had heard about the test, the implication being that none of it is true." AntiPolygraph.org argues that the NSA-produced video omits some information about the polygraph process; it produced a video responding to the NSA video. George Maschke, the founder of the Web site, accused the NSA polygraph video of being "Orwellian".
A 2013 article indicated that after Edward Snowden revealed his identity in 2013, the NSA began requiring polygraphing of employees once per quarter.
Arbitrary firing
The number of exemptions from legal requirements has been criticized. When in 1964 Congress was hearing a bill giving the director of the NSA the power to fire at will any employee, The Washington Post wrote: "This is the very definition of arbitrariness. It means that an employee could be discharged and disgraced on the basis of anonymous allegations without the slightest opportunity to defend himself." Yet, the bill was accepted by an overwhelming majority. Also, every person hired to a job in the US after 2007, at any private organization, state or federal government agency, must be reported to the New Hire Registry, ostensibly to look for child support evaders, except that employees of an intelligence agency may be excluded from reporting if the director deems it necessary for national security reasons.
Facilities
Headquarters
History of headquarters
When the agency was first established, its headquarters and cryptographic center were in the Naval Security Station in Washington, D.C. The COMINT functions were located in Arlington Hall in Northern Virginia, which served as the headquarters of the U.S. Army's cryptographic operations. Because the Soviet Union had detonated a nuclear bomb and because the facilities were crowded, the federal government wanted to move several agencies, including the AFSA/NSA. A planning committee considered Fort Knox, but Fort Meade, Maryland, was ultimately chosen as NSA headquarters because it was far enough away from Washington, D.C. in case of a nuclear strike and was close enough so its employees would not have to move their families.
Construction of additional buildings began after the agency occupied buildings at Fort Meade in the late 1950s, which they soon outgrew. In 1963 the new headquarters building, nine stories tall, opened. NSA workers referred to the building as the "Headquarters Building" and since the NSA management occupied the top floor, workers used "Ninth Floor" to refer to their leaders. COMSEC remained in Washington, D.C., until its new building was completed in 1968. In September 1986, the Operations 2A and 2B buildings, both copper-shielded to prevent eavesdropping, opened with a dedication by President Ronald Reagan. The four NSA buildings became known as the "Big Four." The NSA director moved to 2B when it opened.
Headquarters for the National Security Agency is located at Fort George G. Meade, Maryland, although it is separate from other compounds and agencies that are based within this same military installation. Fort Meade lies southwest of Baltimore and northeast of Washington, D.C. The NSA has two dedicated exits off the Baltimore–Washington Parkway. The eastbound exit from the Parkway (heading toward Baltimore) is open to the public and provides employee access to its main campus and public access to the National Cryptologic Museum. The westbound exit (heading toward Washington) is labeled "NSA Employees Only". The exit may only be used by people with the proper clearances, and security vehicles parked along the road guard the entrance.
NSA is the largest employer in the state of Maryland, and two-thirds of its personnel work at Fort Meade. Built on a portion of Fort Meade's grounds, the site has 1,300 buildings and an estimated 18,000 parking spaces.
The main NSA headquarters and operations building is what James Bamford, author of Body of Secrets, describes as "a modern boxy structure" that appears similar to "any stylish office building." The building is covered with one-way dark glass, which is lined with copper shielding in order to prevent espionage by trapping in signals and sounds. It contains an enormous amount of floor space; Bamford said that the U.S. Capitol "could easily fit inside it four times over."
The facility has over 100 watchposts, one of them being the visitor control center, a two-story area that serves as the entrance. At the entrance, a white pentagonal structure, visitor badges are issued to visitors and security clearances of employees are checked. The visitor center includes a painting of the NSA seal.
The OPS2A building, the tallest building in the NSA complex and the location of much of the agency's operations directorate, is accessible from the visitor center. Bamford described it as a "dark glass Rubik's Cube". The facility's "red corridor" houses non-security operations such as concessions and the drug store. The name refers to the "red badge" which is worn by someone without a security clearance. The NSA headquarters includes a cafeteria, a credit union, ticket counters for airlines and entertainment, a barbershop, and a bank. NSA headquarters has its own post office, fire department, and police force.
The employees at the NSA headquarters reside in various places in the Baltimore-Washington area, including Annapolis, Baltimore, and Columbia in Maryland and the District of Columbia, including the Georgetown community. The NSA maintains a shuttle service from the Odenton station of MARC to its Visitor Control Center and has done so since 2005.
Power consumption
Following a major power outage in 2000, The Baltimore Sun reported in 2003, and again in follow-up reporting through 2007, that the NSA was at risk of electrical overload because of insufficient internal electrical infrastructure at Fort Meade to support the amount of equipment being installed. This problem was apparently recognized in the 1990s but not made a priority, and "now the agency's ability to keep its operations going is threatened."
On August 6, 2006, The Baltimore Sun reported that the NSA had completely maxed out the grid, and that Baltimore Gas & Electric (BGE, now Constellation Energy) was unable to sell them any more power. NSA decided to move some of its operations to a new satellite facility.
BGE provided NSA with 65 to 75 megawatts at Fort Meade in 2007, and expected that an increase of 10 to 15 megawatts would be needed later that year. In 2011, the NSA was Maryland's largest consumer of power. In 2007, as BGE's largest customer, NSA bought as much electricity as Annapolis, the capital city of Maryland.
One estimate put the potential for power consumption by the new Utah Data Center at US$40 million per year.
Computing assets
In 1995, The Baltimore Sun reported that the NSA is the owner of the single largest group of supercomputers.
NSA held a groundbreaking ceremony at Fort Meade in May 2013 for its High Performance Computing Center 2, expected to open in 2016. Called Site M, the center has a 150-megawatt power substation, 14 administrative buildings and 10 parking garages. It cost $3.2 billion and initially uses 60 megawatts of electricity.
Increments II and III are expected to be completed by 2030, and would quadruple the space, with 60 buildings and 40 parking garages. Defense contractors are also establishing or expanding cybersecurity facilities near the NSA and around the Washington metropolitan area.
National Computer Security Center
The DoD Computer Security Center was founded in 1981 and renamed the National Computer Security Center (NCSC) in 1985. NCSC was responsible for computer security throughout the federal government. NCSC was part of NSA, and during the late 1980s and the 1990s, NSA and NCSC published the Trusted Computer System Evaluation Criteria in a six-foot-high Rainbow Series of books that detailed trusted computing and network platform specifications. The Rainbow books were, however, replaced by the Common Criteria in the early 2000s.
Other U.S. facilities
As of 2012, NSA collected intelligence from four geostationary satellites. Satellite receivers were at Roaring Creek Station in Catawissa, Pennsylvania and Salt Creek Station in Arbuckle, California. It operated ten to twenty taps on U.S. telecom switches. NSA had installations in several U.S. states and from them observed intercepts from Europe, the Middle East, North Africa, Latin America, and Asia.
NSA had facilities at Friendship Annex (FANX) in Linthicum, Maryland, which is a 20 to 25-minute drive from Fort Meade; the Aerospace Data Facility at Buckley Space Force Base in Aurora, Colorado; NSA Texas in the Texas Cryptology Center at Lackland Air Force Base in San Antonio, Texas; NSA Georgia, Georgia Cryptologic Center, Fort Gordon, Augusta, Georgia; NSA Hawaii, Hawaii Cryptologic Center in Honolulu; the Multiprogram Research Facility in Oak Ridge, Tennessee, and elsewhere.
On January 6, 2011, a groundbreaking ceremony was held to begin construction on NSA's first Comprehensive National Cyber-security Initiative (CNCI) Data Center, known as the "Utah Data Center" for short. The $1.5B data center is located at Camp Williams, Utah, south of Salt Lake City, and helps support the agency's National Cyber-security Initiative. It was expected to be operational by September 2013; construction of the Utah Data Center finished in May 2019.
In 2009, to protect its assets and access more electricity, NSA sought to decentralize and expand its existing facilities in Fort Meade and Menwith Hill, the latter expansion expected to be completed by 2015.
The Yakima Herald-Republic cited Bamford, saying that many of NSA's bases for its Echelon program were a legacy system, using outdated 1990s technology. In 2004, NSA closed its operations at Bad Aibling Station (Field Station 81) in Bad Aibling, Germany. In 2012, NSA began to move some of its operations at Yakima Research Station, Yakima Training Center, in Washington state to Colorado, planning to eventually close Yakima. As of 2013, NSA also intended to close operations at Sugar Grove, West Virginia.
International stations
Following the signing in 1946–1956 of the UKUSA Agreement between the United States, United Kingdom, Canada, Australia and New Zealand, who then cooperated on signals intelligence and ECHELON, NSA stations were built at GCHQ Bude in Morwenstow, United Kingdom; Geraldton, Pine Gap and Shoal Bay, Australia; Leitrim and Ottawa, Ontario, Canada; Misawa, Japan; and Waihopai and Tangimoana, New Zealand.
NSA operates RAF Menwith Hill in North Yorkshire, United Kingdom, which was, according to BBC News in 2007, the largest electronic monitoring station in the world. The base was planned in 1954 and opened in 1960.
The agency's European Cryptologic Center (ECC), with 240 employees in 2011, is headquartered at a US military compound in Griesheim, near Frankfurt in Germany. A 2011 NSA report indicates that the ECC is responsible for the "largest analysis and productivity in Europe" and focuses on various priorities, including Africa, Europe, the Middle East and counterterrorism operations.
In 2013, a new Consolidated Intelligence Center, also to be used by NSA, was being built at the headquarters of the United States Army Europe in Wiesbaden, Germany. NSA's partnership with the Bundesnachrichtendienst (BND), the German foreign intelligence service, was confirmed by BND president Gerhard Schindler.
Thailand
Thailand is a "3rd party partner" of the NSA along with nine other nations. These are non-English-speaking countries that have made security agreements for the exchange of SIGINT raw material and end product reports.
Thailand is the site of at least two US SIGINT collection stations. One is at the US Embassy in Bangkok, a joint NSA-CIA Special Collection Service (SCS) unit. It presumably eavesdrops on foreign embassies, governmental communications, and other targets of opportunity.
The second installation is a FORNSAT (foreign satellite interception) station in the Thai city of Khon Kaen. It is codenamed INDRA, but has also been referred to as LEMONWOOD. The station consists of a large 3,700–4,600 m2 (40,000–50,000 sq ft) operations building on the west side of the ops compound and four radome-enclosed parabolic antennas. Possibly two of the radome-enclosed antennas are used for SATCOM intercept and two for relaying the intercepted material back to NSA. There is also a PUSHER-type circularly disposed antenna array (CDAA) just north of the ops compound.
NSA activated Khon Kaen in October 1979. Its mission was to eavesdrop on the radio traffic of Chinese army and air force units in southern China, especially in and around the city of Kunming in Yunnan Province. In the late 1970s, the base consisted only of a small CDAA antenna array that was remote-controlled via satellite from the NSA listening post at Kunia, Hawaii, and a small force of civilian contractors from Bendix Field Engineering Corp. whose job it was to keep the antenna array and satellite relay facilities up and running 24/7.
According to the papers of the late General William Odom, the INDRA facility was upgraded in 1986 with a new British-made PUSHER CDAA antenna as part of an overall upgrade of NSA and Thai SIGINT facilities whose objective was to spy on the neighboring communist nations of Vietnam, Laos, and Cambodia.
The base apparently fell into disrepair in the 1990s as China and Vietnam became more friendly towards the US, and by 2002 archived satellite imagery showed that the PUSHER CDAA antenna had been torn down, perhaps indicating that the base had been closed. At some point in the period since 9/11, the Khon Kaen base was reactivated and expanded to include a sizeable SATCOM intercept mission. It is likely that the NSA presence at Khon Kaen is relatively small, and that most of the work is done by civilian contractors.
Research and development
NSA has been involved in debates about public policy, both indirectly as a behind-the-scenes adviser to other departments, and directly during and after Vice Admiral Bobby Ray Inman's directorship. NSA was a major player in the debates of the 1990s regarding the export of cryptography in the United States. Restrictions on export were reduced but not eliminated in 1996.
Its secure government communications work has involved the NSA in numerous technology areas, including the design of specialized communications hardware and software, production of dedicated semiconductors (at the Ft. Meade chip fabrication plant), and advanced cryptography research. For 50 years, NSA designed and built most of its computer equipment in-house, but from the 1990s until about 2003 (when the U.S. Congress curtailed the practice), the agency contracted with the private sector in the fields of research and equipment.
Data Encryption Standard
NSA was embroiled in some controversy concerning its involvement in the creation of the Data Encryption Standard (DES), a standard and public block cipher algorithm used by the U.S. government and banking community. During the development of DES by IBM in the 1970s, NSA recommended changes to some details of the design. There was suspicion that these changes had weakened the algorithm sufficiently to enable the agency to eavesdrop if required, including speculation that a critical component—the so-called S-boxes—had been altered to insert a "backdoor" and that the reduction in key length might have made it feasible for NSA to discover DES keys using massive computing power. It has since been observed that the S-boxes in DES are particularly resilient against differential cryptanalysis, a technique which was not publicly discovered until the late 1980s but known to the IBM DES team.
Advanced Encryption Standard
The involvement of NSA in selecting a successor to Data Encryption Standard (DES), the Advanced Encryption Standard (AES), was limited to hardware performance testing (see AES competition). NSA has subsequently certified AES for protection of classified information when used in NSA-approved systems.
NSA encryption systems
The NSA is responsible for the encryption-related components in these legacy systems:
FNBDT Future Narrow Band Digital Terminal
KL-7 ADONIS off-line rotor encryption machine (post-WWII – 1980s)
KW-26 ROMULUS electronic in-line teletypewriter encryptor (1960s–1980s)
KW-37 JASON fleet broadcast encryptor (1960s–1990s)
KY-57 VINSON tactical radio voice encryptor
KG-84 Dedicated Data Encryption/Decryption
STU-III secure telephone unit, phased out by the STE
The NSA oversees encryption in the following systems that are in use today:
EKMS Electronic Key Management System
Fortezza encryption based on portable crypto token in PC Card format
SINCGARS tactical radio with cryptographically controlled frequency hopping
STE secure terminal equipment
TACLANE product line by General Dynamics C4 Systems
The NSA has specified Suite A and Suite B cryptographic algorithm suites to be used in U.S. government systems; the Suite B algorithms are a subset of those previously specified by NIST and are expected to serve for most information protection purposes, while the Suite A algorithms are secret and are intended for especially high levels of protection.
SHA
The widely used SHA-1 and SHA-2 hash functions were designed by NSA. SHA-1 is a slight modification of the weaker SHA-0 algorithm, also designed by NSA in 1993. This small modification was suggested by NSA two years later, with no justification other than the fact that it provides additional security. An attack for SHA-0 that does not apply to the revised algorithm was indeed found between 1998 and 2005 by academic cryptographers. Because of weaknesses and key length restrictions in SHA-1, NIST deprecates its use for digital signatures, and approves only the newer SHA-2 algorithms for such applications from 2013 on.
A new hash standard, SHA-3, has recently been selected through the competition concluded October 2, 2012 with the selection of Keccak as the algorithm. The process to select SHA-3 was similar to the one held in choosing the AES, but some doubts have been cast over it, since fundamental modifications have been made to Keccak in order to turn it into a standard. These changes potentially undermine the cryptanalysis performed during the competition and reduce the security levels of the algorithm.
Clipper chip
Because of concerns that widespread use of strong cryptography would hamper government use of wiretaps, NSA proposed the concept of key escrow in 1993 and introduced the Clipper chip that would offer stronger protection than DES but would allow access to encrypted data by authorized law enforcement officials. The proposal was strongly opposed and key escrow requirements ultimately went nowhere. However, NSA's Fortezza hardware-based encryption cards, created for the Clipper project, are still used within government, and NSA ultimately declassified and published the design of the Skipjack cipher used on the cards.
Dual EC DRBG random number generator cryptotrojan
NSA promoted the inclusion of a random number generator called Dual EC DRBG in the U.S. National Institute of Standards and Technology's 2007 guidelines. This led to speculation of a backdoor which would allow NSA access to data encrypted by systems using that pseudorandom number generator (PRNG).
This is now deemed plausible because the output of subsequent iterations of the PRNG can provably be determined if the relation between two internal elliptic-curve points is known. Both NIST and RSA now officially recommend against the use of this PRNG.
Perfect Citizen
Perfect Citizen is a program to perform vulnerability assessment by the NSA on U.S. critical infrastructure. It was originally reported to be a program to develop a system of sensors to detect cyber attacks on critical infrastructure computer networks in both the private and public sector through a network monitoring system named Einstein. It is funded by the Comprehensive National Cybersecurity Initiative and thus far Raytheon has received a contract for up to $100 million for the initial stage.
Academic research
NSA has invested many millions of dollars in academic research under grant code prefix MDA904, resulting in over 3,000 papers. NSA/CSS has, at times, attempted to restrict the publication of academic research into cryptography; for example, the Khufu and Khafre block ciphers were voluntarily withheld in response to an NSA request to do so. In response to a FOIA lawsuit, in 2013 the NSA released the 643-page research paper titled "Untangling the Web: A Guide to Internet Research," written and compiled by NSA employees to assist other NSA workers in searching for information of interest to the agency on the public Internet.
Patents
NSA has the ability to file for a patent from the U.S. Patent and Trademark Office under gag order. Unlike normal patents, these are not revealed to the public and do not expire. However, if the Patent Office receives an application for an identical patent from a third party, they will reveal NSA's patent and officially grant it to NSA for the full term on that date.
One of NSA's published patents describes a method of geographically locating an individual computer site in an Internet-like network, based on the latency of multiple network connections. Although no public patent exists, NSA is reported to have used a similar locating technology called trilateralization that allows real-time tracking of an individual's location, including altitude from ground level, using data obtained from cellphone towers.
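The underlying idea, estimating a host's position from the latency of several network connections, resembles ordinary multilateration. A toy sketch of that principle follows (the landmark coordinates, the RTT-to-distance conversion factor, and the brute-force grid search are illustrative assumptions, not the method described in the patent or the reported trilateralization system):

import itertools

# Assumed landmark positions (km) and an assumed conversion from round-trip
# time to distance; real network latency maps to distance far less cleanly.
LANDMARKS = [(0.0, 0.0), (100.0, 0.0), (0.0, 100.0)]
KM_PER_MS = 100.0

def estimate_position(rtts_ms, step=1.0):
    dists = [rtt * KM_PER_MS / 2 for rtt in rtts_ms]  # one-way distance estimates
    best, best_err = None, float("inf")
    grid = [i * step for i in range(151)]
    for x, y in itertools.product(grid, repeat=2):
        # Sum of squared differences between grid-point distances and estimates.
        err = sum((((x - lx) ** 2 + (y - ly) ** 2) ** 0.5 - d) ** 2
                  for (lx, ly), d in zip(LANDMARKS, dists))
        if err < best_err:
            best, best_err = (x, y), err
    return best

print(estimate_position([1.0, 1.2, 0.9]))  # coarse (x, y) estimate in km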
Insignia and memorials
The heraldic insignia of NSA consists of an eagle inside a circle, grasping a key in its talons. The eagle represents the agency's national mission. Its breast features a shield with bands of red and white, taken from the Great Seal of the United States and representing Congress. The key is taken from the emblem of Saint Peter and represents security.
When the NSA was created, the agency had no emblem and used that of the Department of Defense. The agency adopted its first of two emblems in 1963. The current NSA insignia has been in use since 1965, when then-Director, LTG Marshall S. Carter (USA) ordered the creation of a device to represent the agency.
The NSA's flag consists of the agency's seal on a light blue background.
Crews associated with NSA missions have been involved in a number of dangerous and deadly situations. The USS Liberty incident in 1967 and USS Pueblo incident in 1968 are examples of the losses endured during the Cold War.
The National Security Agency/Central Security Service Cryptologic Memorial honors and remembers the fallen personnel, both military and civilian, of these intelligence missions. It is made of black granite and has 171 names carved into it. It is located at NSA headquarters. A tradition of declassifying the stories of the fallen was begun in 2001.
Constitutionality, legality and privacy questions regarding operations
In the United States, at least since 2001, there has been legal controversy over what signal intelligence can be used for and how much freedom the National Security Agency has to use signal intelligence. In 2015, the government made slight changes in how it uses and collects certain types of data, specifically phone records. The government was not analyzing the phone records as of early 2019. The surveillance programs were deemed unlawful in September 2020 in a court of appeals case.
Warrantless wiretaps
On December 16, 2005, The New York Times reported that, under White House pressure and with an executive order from President George W. Bush, the National Security Agency, in an attempt to thwart terrorism, had been tapping phone calls made to persons outside the country, without obtaining warrants from the United States Foreign Intelligence Surveillance Court, a secret court created for that purpose under the Foreign Intelligence Surveillance Act (FISA).
Edward Snowden
Edward Snowden was an American intelligence contractor who, in 2013, revealed the existence of secret wide-ranging information-gathering programs conducted by the National Security Agency (NSA). More specifically, Snowden released information that demonstrated how the United States government was gathering immense amounts of personal communications, emails, phone locations, web histories and more of American citizens without their knowledge. One of Snowden's primary motivators for releasing this information was fear of a surveillance state developing as a result of the infrastructure being created by the NSA. As Snowden recounts, "I believe that, at this point in history, the greatest danger to our freedom and way of life comes from the reasonable fear of omniscient State powers kept in check by nothing more than policy documents... It is not that I do not value intelligence, but that I oppose . . . omniscient, automatic, mass surveillance. . . . That seems to me a greater threat to the institutions of free society than missed intelligence reports, and unworthy of the costs.”
In March 2014, Army General Martin Dempsey, Chairman of the Joint Chiefs of Staff, told the House Armed Services Committee, "The vast majority of the documents that Snowden ... exfiltrated from our highest levels of security ... had nothing to do with exposing government oversight of domestic activities. The vast majority of those were related to our military capabilities, operations, tactics, techniques, and procedures." When asked in a May 2014 interview to quantify the number of documents Snowden stole, retired NSA director Keith Alexander said there was no accurate way of counting what he took, but Snowden may have downloaded more than a million documents.
Other surveillance
On January 17, 2006, the Center for Constitutional Rights filed a lawsuit, CCR v. Bush, against the George W. Bush Presidency. The lawsuit challenged the National Security Agency's (NSA's) surveillance of people within the U.S., including the interception of CCR emails without securing a warrant first.
In the August 2006 case ACLU v. NSA, U.S. District Court Judge Anna Diggs Taylor concluded that NSA's warrantless surveillance program was both illegal and unconstitutional. On July 6, 2007, the 6th Circuit Court of Appeals vacated the decision on the grounds that the ACLU lacked standing to bring the suit.
In September 2008, the Electronic Frontier Foundation (EFF) filed a class action lawsuit against the NSA and several high-ranking officials of the Bush administration, charging an "illegal and unconstitutional program of dragnet communications surveillance," based on documentation provided by former AT&T technician Mark Klein.
As a result of the USA Freedom Act passed by Congress in June 2015, the NSA had to shut down its bulk phone surveillance program on November 29 of the same year. The USA Freedom Act forbids the NSA to collect metadata and content of phone calls unless it has a warrant for terrorism investigation. In that case, the agency must ask the telecom companies for the record, which will only be kept for six months. The NSA's use of large telecom companies to assist it with its surveillance efforts has caused several privacy concerns.
AT&T Internet monitoring
In May 2008, Mark Klein, a former AT&T employee, alleged that his company had cooperated with NSA in installing Narus hardware to replace the FBI Carnivore program, to monitor network communications including traffic between U.S. citizens.
Data mining
NSA was reported in 2008 to use its computing capability to analyze "transactional" data that it regularly acquires from other government agencies, which gather it under their own jurisdictional authorities. As part of this effort, NSA now monitors huge volumes of records of domestic email data, web addresses from Internet searches, bank transfers, credit-card transactions, travel records, and telephone data, according to current and former intelligence officials interviewed by The Wall Street Journal. The sender, recipient, and subject line of emails can be included, but the content of the messages or of phone calls is not.
A 2013 advisory group for the Obama administration, seeking to reform NSA spying programs following the revelations in documents released by Edward J. Snowden, mentioned in 'Recommendation 30' on page 37, "...that the National Security Council staff should manage an interagency process to review on a regular basis the activities of the US Government regarding attacks that exploit a previously unknown vulnerability in a computer application." Retired cybersecurity expert Richard A. Clarke was a group member and stated on April 11, 2014, that NSA had no advance knowledge of Heartbleed.
Illegally obtained evidence
In August 2013 it was revealed that a 2005 IRS training document showed that NSA intelligence intercepts and wiretaps, both foreign and domestic, were being supplied to the Drug Enforcement Administration (DEA) and Internal Revenue Service (IRS) and were illegally used to launch criminal investigations of US citizens. Law enforcement agents were directed to conceal how the investigations began and recreate an apparently legal investigative trail by re-obtaining the same evidence by other means.
Barack Obama administration
In the months leading to April 2009, the NSA intercepted the communications of U.S. citizens, including a Congressman, although the Justice Department believed that the interception was unintentional. The Justice Department then took action to correct the issues and bring the program into compliance with existing laws. United States Attorney General Eric Holder resumed the program according to his understanding of the Foreign Intelligence Surveillance Act amendment of 2008, without explaining what had occurred.
Polls conducted in June 2013 found divided results among Americans regarding NSA's secret data collection. Rasmussen Reports found that 59% of Americans disapprove, Gallup found that 53% disapprove, and Pew found that 56% are in favor of NSA data collection.
Section 215 metadata collection
On April 25, 2013, the NSA obtained a court order requiring Verizon's Business Network Services to provide metadata on all calls in its system to the NSA "on an ongoing daily basis" for a three-month period, as reported by The Guardian on June 6, 2013. This information includes "the numbers of both parties on a call ... location data, call duration, unique identifiers, and the time and duration of all calls" but not "[t]he contents of the conversation itself". The order relies on the so-called "business records" provision of the Patriot Act.
In August 2013, following the Snowden leaks, new details about the NSA's data mining activity were revealed. Reportedly, the majority of emails into or out of the United States are captured at "selected communications links" and automatically analyzed for keywords or other "selectors". Emails that do not match are deleted.
The utility of such a massive metadata collection in preventing terrorist attacks is disputed. Many studies have found the dragnet-like system to be ineffective. One such report, released by the New America Foundation, concluded that after an analysis of 225 terrorism cases, the NSA "had no discernible impact on preventing acts of terrorism."
Defenders of the program said that while metadata alone cannot provide all the information necessary to prevent an attack, it assures the ability to "connect the dots" between suspect foreign numbers and domestic numbers with a speed only the NSA's software is capable of. One benefit of this is quickly being able to determine the difference between suspicious activity and real threats. As an example, NSA director General Keith B. Alexander mentioned at the annual Cybersecurity Summit in 2013, that metadata analysis of domestic phone call records after the Boston Marathon bombing helped determine that rumors of a follow-up attack in New York were baseless.
In addition to doubts about its effectiveness, many people argue that the collection of metadata is an unconstitutional invasion of privacy. The collection process remained legal and grounded in the ruling from Smith v. Maryland (1979). A prominent opponent of the data collection and its legality is U.S. District Judge Richard J. Leon, who issued a report in 2013 in which he stated: "I cannot imagine a more 'indiscriminate' and 'arbitrary invasion' than this systematic and high tech collection and retention of personal data on virtually every single citizen for purposes of querying and analyzing it without prior judicial approval...Surely, such a program infringes on 'that degree of privacy' that the founders enshrined in the Fourth Amendment".
On May 7, 2015, the United States Court of Appeals for the Second Circuit ruled that the interpretation of Section 215 of the Patriot Act was wrong and that the NSA program that had been collecting Americans' phone records in bulk was illegal. It stated that Section 215 cannot be clearly interpreted to allow the government to collect national phone data and, as a result, expired on June 1, 2015. This ruling "is the first time a higher-level court in the regular judicial system has reviewed the NSA phone records program." The replacement law, the USA Freedom Act, enables the NSA to continue to have bulk access to citizens' metadata, with the stipulation that the data will now be stored by the companies themselves. This change has no effect on other agency procedures, outside of metadata collection, which have purportedly challenged Americans' Fourth Amendment rights, including Upstream collection, a mass of techniques used by the agency to collect and store Americans' data and communications directly from the Internet backbone.
Under the Upstream collection program, the NSA paid telecommunications companies hundreds of millions of dollars in order to collect data from them. While companies such as Google and Yahoo! claim that they do not provide "direct access" from their servers to the NSA unless under a court order, the NSA had access to users' emails, phone calls, and cellular data. Under this new ruling, telecommunications companies maintain bulk user metadata on their servers for at least 18 months, to be provided upon request to the NSA. This ruling made the mass storage of specific phone records at NSA datacenters illegal, but it did not rule on Section 215's constitutionality.
Fourth Amendment encroachment
In a declassified document it was revealed that 17,835 phone lines were on an improperly permitted "alert list" from 2006 to 2009 in breach of compliance, which tagged these phone lines for daily monitoring. Eleven percent of these monitored phone lines met the agency's legal standard for "reasonably articulable suspicion" (RAS).
The NSA tracks the locations of hundreds of millions of cellphones per day, allowing it to map people's movements and relationships in detail. The NSA has been reported to have access to all communications made via Google, Microsoft, Facebook, Yahoo, YouTube, AOL, Skype, Apple and Paltalk, and collects hundreds of millions of contact lists from personal email and instant messaging accounts each year. It has also managed to weaken much of the encryption used on the Internet (by collaborating with, coercing or otherwise infiltrating numerous technology companies to leave "backdoors" into their systems), so that the majority of encryption is inadvertently vulnerable to different forms of attack.
Domestically, the NSA has been proven to collect and store metadata records of phone calls, including over 120 million US Verizon subscribers, as well as intercept vast amounts of communications via the internet (Upstream). The government's legal standing had been to rely on a secret interpretation of the Patriot Act whereby the entirety of US communications may be considered "relevant" to a terrorism investigation if it is expected that even a tiny minority may relate to terrorism. The NSA also supplies foreign intercepts to the DEA, IRS and other law enforcement agencies, who use these to initiate criminal investigations. Federal agents are then instructed to "recreate" the investigative trail via parallel construction.
The NSA also spies on influential Muslims to obtain information that could be used to discredit them, such as their use of pornography. The targets, both domestic and abroad, are not suspected of any crime but hold religious or political views deemed "radical" by the NSA.
According to a report in The Washington Post in July 2014, relying on information provided by Snowden, 90% of those placed under surveillance in the U.S. are ordinary Americans and are not the intended targets. The newspaper said it had examined documents including emails, text messages, and online accounts that support the claim.
Congressional oversight
The Intelligence Committees of the US House and Senate exercise primary oversight over the NSA; other members of Congress have been denied access to materials and information regarding the agency and its activities. The United States Foreign Intelligence Surveillance Court, the secret court charged with regulating the NSA's activities, is, according to its chief judge, incapable of investigating or verifying how often the NSA breaks even its own secret rules. It has since been reported that the NSA violated its own rules on data access thousands of times a year, many of these violations involving large-scale data interceptions. NSA officers have even used data intercepts to spy on love interests; "most of the NSA violations were self-reported, and each instance resulted in administrative action of termination."
The NSA has "generally disregarded the special rules for disseminating United States person information" by illegally sharing its intercepts with other law enforcement agencies. A March 2009 FISA Court opinion, which the court released, states that protocols restricting data queries had been "so frequently and systemically violated that it can be fairly said that this critical element of the overall ... regime has never functioned effectively." In 2011 the same court noted that the "volume and nature" of the NSA's bulk foreign Internet intercepts was "fundamentally different from what the court had been led to believe". Email contact lists (including those of US citizens) are collected at numerous foreign locations to work around the illegality of doing so on US soil.
Legal opinions on the NSA's bulk collection program have differed. In mid-December 2013, U.S. District Judge Richard Leon ruled that the "almost-Orwellian" program likely violates the Constitution, and wrote, "I cannot imagine a more 'indiscriminate' and 'arbitrary invasion' than this systematic and high-tech collection and retention of personal data on virtually every single citizen for purposes of querying and analyzing it without prior judicial approval. Surely, such a program infringes on 'that degree of privacy' that the Founders enshrined in the Fourth Amendment. Indeed, I have little doubt that the author of our Constitution, James Madison, who cautioned us to beware 'the abridgement of freedom of the people by gradual and silent encroachments by those in power,' would be aghast."
Later that month, U.S. District Judge William Pauley ruled that the NSA's collection of telephone records is legal and valuable in the fight against terrorism. In his opinion, he wrote, "a bulk telephony metadata collection program [is] a wide net that could find and isolate gossamer contacts among suspected terrorists in an ocean of seemingly disconnected data" and noted that a similar collection of data prior to 9/11 might have prevented the attack.
Official responses
At a March 2013 Senate Intelligence Committee hearing, Senator Ron Wyden asked Director of National Intelligence James Clapper, "does the NSA collect any type of data at all on millions or hundreds of millions of Americans?" Clapper replied "No, sir. ... Not wittingly. There are cases where they could inadvertently perhaps collect, but not wittingly." This statement came under scrutiny months later when, in June 2013, details of the PRISM surveillance program were published, showing that "the NSA apparently can gain access to the servers of nine Internet companies for a wide range of digital data." Wyden said that Clapper had failed to give a "straight answer" in his testimony. Clapper, in response to criticism, said, "I responded in what I thought was the most truthful, or least untruthful manner." Clapper added, "There are honest differences on the semantics of what – when someone says 'collection' to me, that has a specific meaning, which may have a different meaning to him."
NSA whistle-blower Edward Snowden additionally revealed the existence of XKeyscore, a top secret NSA program that allows the agency to search vast databases of "the metadata as well as the content of emails and other internet activity, such as browser history," with capability to search by "name, telephone number, IP address, keywords, the language in which the internet activity was conducted or the type of browser used." XKeyscore "provides the technological capability, if not the legal authority, to target even US persons for extensive electronic surveillance without a warrant provided that some identifying information, such as their email or IP address, is known to the analyst."
Regarding the necessity of these NSA programs, Alexander stated on June 27, 2013, that the NSA's bulk phone and Internet intercepts had been instrumental in preventing 54 terrorist "events", including 13 in the US, and in all but one of these cases had provided the initial tip to "unravel the threat stream". On July 31 NSA Deputy Director John Inglis conceded to the Senate that these intercepts had not been vital in stopping any terrorist attacks, but were "close" to vital in identifying and convicting four San Diego men for sending US$8,930 to Al-Shabaab, a militia that conducts terrorism in Somalia.
The U.S. government has aggressively sought to dismiss and challenge Fourth Amendment cases raised against it, and has granted retroactive immunity to ISPs and telecoms participating in domestic surveillance.
The U.S. military has acknowledged blocking access to parts of The Guardian website for thousands of defense personnel across the country, and blocking the entire Guardian website for personnel stationed throughout Afghanistan, the Middle East, and South Asia.
An October 2014 United Nations report condemned mass surveillance by the United States and other countries as violating multiple international treaties and conventions that guarantee core privacy rights.
Responsibility for international ransomware attack
An exploit dubbed EternalBlue, created by the NSA, was used in the unprecedented worldwide WannaCry ransomware attack in May 2017. The exploit had been leaked online by a hacking group, The Shadow Brokers, nearly a month prior to the attack. A number of experts have pointed the finger at the NSA's non-disclosure of the underlying vulnerability, and their loss of control over the EternalBlue attack tool that exploited it. Edward Snowden said that if the NSA had "privately disclosed the flaw used to attack hospitals when they found it, not when they lost it, [the attack] might not have happened". Wikipedia co-founder, Jimmy Wales, stated that he joined "with Microsoft and the other leaders of the industry in saying this is a huge screw-up by the government ... the moment the NSA found it, they should have notified Microsoft so they could quietly issue a patch and really chivvy people along, long before it became a huge problem."
Activities of previous employees
Former employee David Evenden, who had left the NSA to work for the US defense contractor CyberPoint at a position in the United Arab Emirates, was tasked in 2015 with hacking the UAE's neighbor Qatar to determine whether it was funding the terrorist group Muslim Brotherhood. He quit the company after learning his team had hacked Qatari Sheikha Moza bint Nasser's email exchanges with Michelle Obama, just prior to her visit to Doha. Upon Evenden's return to the US, he reported his experiences to the FBI. The incident highlights a growing trend of former NSA employees and contractors leaving the agency to start up their own firms, and then hiring out to countries like Turkey, Sudan and even Russia, a country involved in numerous cyberattacks against the US.
2021 Denmark-NSA collaborative surveillance
In May 2021, it was reported that the Danish Defence Intelligence Service had collaborated with the NSA to eavesdrop on fellow EU members and leaders, leading to wide backlash among EU countries and demands for explanations from the Danish and American governments.
See also
Notes
References
Bamford, James. Body of Secrets: Anatomy of the Ultra-Secret National Security Agency, Random House Digital, Inc., December 18, 2007. . Previously published as: Doubleday, 2001, .
Bauer, Craig P. Secret History: The Story of Cryptology (Volume 76 of Discrete Mathematics and Its Applications). CRC Press, 2013. .
Weiland, Matt and Sean Wilsey. State by State. HarperCollins, October 19, 2010. .
Further reading
Adams, Sam, War of Numbers: An Intelligence Memoir Steerforth; new edition (June 1, 1998).
Aid, Matthew, The Secret Sentry: The Untold History of the National Security Agency, 432 pages, , Bloomsbury Press (June 9, 2009).
Mandatory Declassification Review – Interagency Security Classification Appeals Panel
Bamford, James, The Puzzle Palace, Penguin Books, .
Bamford, James, The New York Times, December 25, 2005; The Agency That Could Be Big Brother.
Bamford, James, The Shadow Factory, Anchor Books, 2009, .
Radden Keefe, Patrick, Chatter: Dispatches from the Secret World of Global Eavesdropping, Random House, .
Kent, Sherman, Strategic Intelligence for American Public Policy.
Kahn, David, The Codebreakers, 1181 pp., . Look for the 1967 rather than the 1996 edition.
Laqueur, Walter, A World of secrets.
Liston, Robert A., The Pueblo Surrender: a Covert Action by the National Security Agency, .
Levy, Steven, Crypto: How the Code Rebels Beat the Government—Saving Privacy in the Digital Age, Penguin Books, .
Prados, John, The Soviet estimate: U.S. intelligence analysis & Russian military strength, hardcover, 367 pages, , Dial Press (1982).
Perro, Ralph J. "Interviewing With An Intelligence Agency (or, A Funny Thing Happened On The Way To Fort Meade)." (Archive) Federation of American Scientists. November 2003. Updated January 2004. – About the experience of a candidate of an NSA job in pre-employment screening. "Ralph J. Perro" is a pseudonym that is a reference to Ralph J. Canine (perro is Spanish for "dog", and a dog is a type of canine)
Shaker, Richard J. "The Agency That Came in from the Cold." (Archive) Notices. American Mathematical Society. May/June 1992. pp. 408–411.
Tully, Andrew, The Super Spies: More Secret, More Powerful than the CIA, 1969, LC 71080912.
Church Committee, Intelligence Activities and the Rights of Americans: 1976 US Senate Report on Illegal Wiretaps and Domestic Spying by the FBI, CIA and NSA, Red and Black Publishers (May 1, 2008).
"Just what is the NSA?" (video) CNN. June 7, 2013.
"National Security Agency Releases History of Cold War Intelligence Activities." George Washington University. National Security Archive Electronic Briefing Book No. 260. Posted November 14, 2008.
External links
National Security Agency – 60 Years of Defending Our Nation
Records of the National Security Agency/Central Security Service
The National Security Archive at George Washington University
National Security Agency (NSA) Archive on the Internet Archive
1952 establishments in the United States
Articles containing video clips
Computer security organizations
Government agencies established in 1952
Mass surveillance
Signals intelligence agencies
Supercomputer sites
United States Department of Defense agencies
United States government secrecy
Intelligence analysis agencies
One-time pad

In cryptography, the one-time pad (OTP) is an encryption technique that cannot be cracked, but requires the use of a single-use pre-shared key that is no smaller than the message being sent. In this technique, a plaintext is paired with a random secret key (also referred to as a one-time pad). Then, each bit or character of the plaintext is encrypted by combining it with the corresponding bit or character from the pad using modular addition.
The resulting ciphertext will be impossible to decrypt or break if the following four conditions are met:
The key must be at least as long as the plaintext.
The key must be random (uniformly distributed in the set of all possible keys and independent of the plaintext), entirely sampled from a non-algorithmic, chaotic source such as a hardware random number generator. It is not sufficient for OTP keys to pass statistical randomness tests as such tests cannot measure entropy, and the number of bits of entropy must be at least equal to the number of bits in the plaintext. For example, using cryptographic hashes or mathematical functions (such as logarithm or square root) to generate keys from fewer bits of entropy would break the uniform distribution requirement, and therefore would not provide perfect secrecy.
The key must never be reused in whole or in part.
The key must be kept completely secret by the communicating parties.
It has also been mathematically proven that any cipher with the property of perfect secrecy must use keys with effectively the same requirements as OTP keys. Digital versions of one-time pad ciphers have been used by nations for critical diplomatic and military communication, but the problems of secure key distribution make them impractical for most applications.
First described by Frank Miller in 1882, the one-time pad was re-invented in 1917. On July 22, 1919, U.S. Patent 1,310,719 was issued to Gilbert Vernam for the XOR operation used for the encryption of a one-time pad. Derived from his Vernam cipher, the system was a cipher that combined a message with a key read from a punched tape. In its original form, Vernam's system was vulnerable because the key tape was a loop, which was reused whenever the loop made a full cycle. One-time use came later, when Joseph Mauborgne recognized that if the key tape were totally random, then cryptanalysis would be impossible.
The "pad" part of the name comes from early implementations where the key material was distributed as a pad of paper, allowing the current top sheet to be torn off and destroyed after use. For concealment the pad was sometimes so small that a powerful magnifying glass was required to use it. The KGB used pads of such size that they could fit in the palm of a hand, or in a walnut shell. To increase security, one-time pads were sometimes printed onto sheets of highly flammable nitrocellulose, so that they could easily be burned after use.
There is some ambiguity to the term "Vernam cipher" because some sources use "Vernam cipher" and "one-time pad" synonymously, while others refer to any additive stream cipher as a "Vernam cipher", including those based on a cryptographically secure pseudorandom number generator (CSPRNG).
History
Frank Miller in 1882 was the first to describe the one-time pad system for securing telegraphy.
The next one-time pad system was electrical. In 1917, Gilbert Vernam (of AT&T Corporation) invented and later patented in 1919 () a cipher based on teleprinter technology. Each character in a message was electrically combined with a character on a punched paper tape key. Joseph Mauborgne (then a captain in the U.S. Army and later chief of the Signal Corps) recognized that the character sequence on the key tape could be completely random and that, if so, cryptanalysis would be more difficult. Together they invented the first one-time tape system.
The next development was the paper pad system. Diplomats had long used codes and ciphers for confidentiality and to minimize telegraph costs. For the codes, words and phrases were converted to groups of numbers (typically 4 or 5 digits) using a dictionary-like codebook. For added security, secret numbers could be combined with (usually modular addition) each code group before transmission, with the secret numbers being changed periodically (this was called superencryption). In the early 1920s, three German cryptographers (Werner Kunze, Rudolf Schauffler, and Erich Langlotz), who were involved in breaking such systems, realized that they could never be broken if a separate randomly chosen additive number was used for every code group. They had duplicate paper pads printed with lines of random number groups. Each page had a serial number and eight lines. Each line had six 5-digit numbers. A page would be used as a work sheet to encode a message and then destroyed. The serial number of the page would be sent with the encoded message. The recipient would reverse the procedure and then destroy his copy of the page. The German foreign office put this system into operation by 1923.
A separate notion was the use of a one-time pad of letters to encode plaintext directly as in the example below. Leo Marks describes inventing such a system for the British Special Operations Executive during World War II, though he suspected at the time that it was already known in the highly compartmentalized world of cryptography, as for instance at Bletchley Park.
The final discovery was made by information theorist Claude Shannon in the 1940s who recognized and proved the theoretical significance of the one-time pad system. Shannon delivered his results in a classified report in 1945 and published them openly in 1949. At the same time, Soviet information theorist Vladimir Kotelnikov had independently proved the absolute security of the one-time pad; his results were delivered in 1941 in a report that apparently remains classified.
Example
Suppose Alice wishes to send the message hello to Bob. Assume two pads of paper containing identical random sequences of letters were somehow previously produced and securely issued to both. Alice chooses the appropriate unused page from the pad. The way to do this is normally arranged for in advance, as for instance "use the 12th sheet on 1 May", or "use the next available sheet for the next message".
The material on the selected sheet is the key for this message. Each letter from the pad will be combined in a predetermined way with one letter of the message. (It is common, but not required, to assign each letter a numerical value, e.g., a is 0, b is 1, and so on.)
In this example, the technique is to combine the key and the message using modular addition (essentially the standard Vigenère cipher). The numerical values of corresponding message and key letters are added together, modulo 26. So, if key material begins with XMCKL and the message is hello, then the coding would be done as follows:
h e l l o message
7 (h) 4 (e) 11 (l) 11 (l) 14 (o) message
+ 23 (X) 12 (M) 2 (C) 10 (K) 11 (L) key
= 30 16 13 21 25 message + key
= 4 (E) 16 (Q) 13 (N) 21 (V) 25 (Z) (message + key) mod 26
E Q N V Z → ciphertext
If a number is larger than 25, then the remainder after subtraction of 26 is taken in modular arithmetic fashion. This simply means that if the computations "go past" Z, the sequence starts again at A.
The ciphertext to be sent to Bob is thus EQNVZ. Bob uses the matching key page and the same process, but in reverse, to obtain the plaintext. Here the key is subtracted from the ciphertext, again using modular arithmetic:
E Q N V Z ciphertext
4 (E) 16 (Q) 13 (N) 21 (V) 25 (Z) ciphertext
− 23 (X) 12 (M) 2 (C) 10 (K) 11 (L) key
= −19 4 11 11 14 ciphertext – key
= 7 (h) 4 (e) 11 (l) 11 (l) 14 (o) ciphertext – key (mod 26)
h e l l o → message
Similar to the above, if a number is negative, then 26 is added to make the number zero or higher.
Thus Bob recovers Alice's plaintext, the message hello. Both Alice and Bob destroy the key sheet immediately after use, thus preventing reuse and an attack against the cipher. The KGB often issued its agents one-time pads printed on tiny sheets of flash paper, paper chemically converted to nitrocellulose, which burns almost instantly and leaves no ash.
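As a minimal sketch of the letter arithmetic above (assuming messages and keys drawn only from the 26 letters; the function names otp_encrypt and otp_decrypt are illustrative, not from any library):

def _letters_to_numbers(text):
    # a/A -> 0, b/B -> 1, ..., z/Z -> 25
    return [ord(ch.upper()) - ord('A') for ch in text]

def otp_encrypt(plaintext, key):
    # (message + key) mod 26, letter by letter
    assert len(key) >= len(plaintext), "key must be at least as long as the message"
    nums = [(p + k) % 26
            for p, k in zip(_letters_to_numbers(plaintext), _letters_to_numbers(key))]
    return ''.join(chr(n + ord('A')) for n in nums)

def otp_decrypt(ciphertext, key):
    # (ciphertext - key) mod 26, letter by letter
    nums = [(c - k) % 26
            for c, k in zip(_letters_to_numbers(ciphertext), _letters_to_numbers(key))]
    return ''.join(chr(n + ord('a')) for n in nums)

print(otp_encrypt("hello", "XMCKL"))  # prints EQNVZ
print(otp_decrypt("EQNVZ", "XMCKL"))  # prints hello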
The classical one-time pad of espionage used actual pads of minuscule, easily concealed paper, a sharp pencil, and some mental arithmetic. The method can be implemented now as a software program, using data files as input (plaintext), output (ciphertext) and key material (the required random sequence). The exclusive or (XOR) operation is often used to combine the plaintext and the key elements, and is especially attractive on computers since it is usually a native machine instruction and is therefore very fast. It is, however, difficult to ensure that the key material is actually random, is used only once, never becomes known to the opposition, and is completely destroyed after use. The auxiliary parts of a software one-time pad implementation present real challenges: secure handling/transmission of plaintext, truly random keys, and one-time-only use of the key.
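A comparable sketch of the XOR form (the names are illustrative, and Python's secrets module is only a stand-in: it draws from the operating system's CSPRNG, so strictly speaking it does not produce true one-time-pad key material):

import secrets

def xor_pad(data: bytes, pad: bytes) -> bytes:
    # XOR is its own inverse, so the same function encrypts and decrypts.
    assert len(pad) >= len(data), "pad must be at least as long as the data"
    return bytes(d ^ k for d, k in zip(data, pad))

message = b"attack at dawn"
pad = secrets.token_bytes(len(message))  # stand-in for truly random pad material
ciphertext = xor_pad(message, pad)
assert xor_pad(ciphertext, pad) == message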
Attempt at cryptanalysis
To continue the example from above, suppose Eve intercepts Alice's ciphertext: EQNVZ. If Eve had infinite time, she would find that the key XMCKL would produce the plaintext hello, but she would also find that the key TQURI would produce the plaintext later, an equally plausible message:
4 (E) 16 (Q) 13 (N) 21 (V) 25 (Z) ciphertext
− 19 (T) 16 (Q) 20 (U) 17 (R) 8 (I) possible key
= −15 0 −7 4 17 ciphertext-key
= 11 (l) 0 (a) 19 (t) 4 (e) 17 (r) ciphertext-key (mod 26)
In fact, it is possible to "decrypt" out of the ciphertext any message whatsoever with the same number of characters, simply by using a different key, and there is no information in the ciphertext that will allow Eve to choose among the various possible readings of the ciphertext.
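This can be made concrete: given only a ciphertext, one can always exhibit a key that "decrypts" it to any chosen message of the same length. A sketch using the mod-26 convention of the example above (the helper name key_for is illustrative):

def key_for(ciphertext, candidate_plaintext):
    # key = (ciphertext - candidate plaintext) mod 26, letter by letter
    nums = [(ord(c.upper()) - ord(p.upper())) % 26
            for c, p in zip(ciphertext, candidate_plaintext)]
    return ''.join(chr(n + ord('A')) for n in nums)

print(key_for("EQNVZ", "hello"))  # XMCKL
print(key_for("EQNVZ", "later"))  # TQURI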
If the key is not truly random, it is possible to use statistical analysis to determine which of the plausible keys is the "least" random and therefore more likely to be the correct one. If a key is reused, it will noticeably be the only key that produces sensible plaintexts from both ciphertexts (the chances of some random incorrect key also producing two sensible plaintexts are very slim).
Perfect secrecy
One-time pads are "information-theoretically secure" in that the encrypted message (i.e., the ciphertext) provides no information about the original message to a cryptanalyst (except the maximum possible length of the message). This is a very strong notion of security first developed during WWII by Claude Shannon and proved, mathematically, to be true for the one-time pad by Shannon at about the same time. His result was published in the Bell System Technical Journal in 1949. Properly used, one-time pads are secure in this sense even against adversaries with infinite computational power.
Claude Shannon proved, using information theoretic considerations, that the one-time pad has a property he termed perfect secrecy; that is, the ciphertext C gives absolutely no additional information about the plaintext. This is because (intuitively), given a truly uniformly random key that is used only once, a ciphertext can be translated into any plaintext of the same length, and all are equally likely. Thus, the a priori probability of a plaintext message M is the same as the a posteriori probability of a plaintext message M given the corresponding ciphertext.
Mathematically, this is expressed as Η(M) = Η(M|C), where Η(M) is the information entropy of the plaintext and Η(M|C) is the conditional entropy of the plaintext given the ciphertext C. (Here, Η is the capital Greek letter eta.) This implies that for every message M and corresponding ciphertext C, there must be at least one key K that binds them as a one-time pad. Mathematically speaking, this means |K| ≥ |C| ≥ |M| must hold, where |K|, |C| and |M| denote the quantities of possible keys, ciphers and messages, respectively. In other words, to be able to go from any plaintext in the message space M to any cipher in the cipher space C (via encryption) and from any cipher in the cipher space C to a plaintext in the message space M (decryption), it would require at least |M| keys (with all keys used with equal probability of 1/|K| to ensure perfect secrecy).
Another way of stating perfect secrecy is that for all messages m1, m2 in message space M, and for all ciphers c in cipher space C, we have Pr[E_k(m1) = c] = Pr[E_k(m2) = c], where the probabilities are taken over the choice of k in key space K and the coin tosses of a probabilistic (encryption) algorithm E. Perfect secrecy is a strong notion of cryptanalytic difficulty.
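As a short worked sketch of why the XOR pad achieves this (a standard argument for an n-bit message with an independent, uniformly random n-bit key; not a quotation from Shannon's proof): for C = M ⊕ K,

\Pr[C = c \mid M = m] = \Pr[K = m \oplus c] = 2^{-n} \quad \text{for every } m, c,

so

\Pr[C = c] = \sum_{m} \Pr[M = m] \cdot 2^{-n} = 2^{-n},

and by Bayes' rule

\Pr[M = m \mid C = c] = \frac{\Pr[C = c \mid M = m]\,\Pr[M = m]}{\Pr[C = c]} = \Pr[M = m],

and this last equality is exactly the statement Η(M|C) = Η(M): the ciphertext gives no information about the plaintext.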
Conventional symmetric encryption algorithms use complex patterns of substitution and transpositions. For the best of these currently in use, it is not known whether there can be a cryptanalytic procedure that can efficiently reverse (or even partially reverse) these transformations without knowing the key used during encryption. Asymmetric encryption algorithms depend on mathematical problems that are thought to be difficult to solve, such as integer factorization or the discrete logarithm. However, there is no proof that these problems are hard, and a mathematical breakthrough could make existing systems vulnerable to attack.
Given perfect secrecy, in contrast to conventional symmetric encryption, the one-time pad is immune even to brute-force attacks. Trying all keys simply yields all plaintexts, all equally likely to be the actual plaintext. Even with a partially known plaintext, brute-force attacks cannot be used, since an attacker is unable to gain any information about the parts of the key needed to decrypt the rest of the message. The parts of the plaintext that are known will reveal only the parts of the key corresponding to them, and they correspond on a strictly one-to-one basis; a uniformly random key's bits will be independent.
Quantum computers have been shown by Peter Shor and others to be much faster at solving some problems that the security of traditional asymmetric encryption algorithms depends on. The cryptographic algorithms that depend on these problems' difficulty would be rendered obsolete with a powerful enough quantum computer. One-time pads, however, would remain secure, as perfect secrecy does not depend on assumptions about the computational resources of an attacker. Quantum cryptography and post-quantum cryptography involve studying the impact of quantum computers on information security.
Problems
Despite Shannon's proof of its security, the one-time pad has serious drawbacks in practice because it requires:
Truly random, as opposed to pseudorandom, one-time pad values, which is a non-trivial requirement. Random number generation in computers is often difficult, and pseudorandom number generators are often used for their speed and usefulness for most applications. True random number generators exist, but are typically slower and more specialized.
Secure generation and exchange of the one-time pad values, which must be at least as long as the message. This is important because the security of the one-time pad depends on the security of the one-time pad exchange. If an attacker is able to intercept the one-time pad value, they can decrypt messages sent using the one-time pad.
Careful treatment to make sure that the one-time pad values continue to remain secret and are disposed of correctly, preventing any reuse (partially or entirely) —hence "one-time". Problems with data remanence can make it difficult to completely erase computer media.
One-time pads solve few current practical problems in cryptography. High quality ciphers are widely available and their security is not currently considered a major worry. Such ciphers are almost always easier to employ than one-time pads because the amount of key material that must be properly and securely generated, distributed and stored is far smaller. Additionally, public key cryptography overcomes the problem of key distribution.
True randomness
High-quality random numbers are difficult to generate. The random number generation functions in most programming language libraries are not suitable for cryptographic use. Even those generators that are suitable for normal cryptographic use, including /dev/random and many hardware random number generators, may make some use of cryptographic functions whose security has not been proven. An example of a technique for generating pure randomness is measuring radioactive emissions.
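As an illustration of this gap (a minimal sketch; the 32-byte pad length is an arbitrary choice for the example), a language's default pseudorandom generator is unsuitable even for ordinary cryptography, and while an operating-system CSPRNG such as Python's secrets module is fine for conventional ciphers, it is still a deterministic algorithm and therefore does not provide the true randomness a one-time pad demands:

```python
# Contrast between an unsuitable PRNG and an OS-level CSPRNG for pad material.
# NOTE: even secrets / os.urandom is a deterministic CSPRNG seeded by the OS;
# it does not satisfy the one-time pad's requirement of true randomness, which
# needs a physical source such as a hardware random number generator.
import random
import secrets

weak_pad = bytes(random.randrange(256) for _ in range(32))  # Mersenne Twister: never use for keys
better_pad = secrets.token_bytes(32)                        # CSPRNG: suitable for ordinary crypto,
                                                            # but still not information-theoretically secure
```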
In particular, one-time use is absolutely necessary. If a one-time pad is used just twice, simple mathematical operations can reduce it to a running key cipher. For example, if p1 and p2 represent two distinct plaintext messages and they are each encrypted by a common key k, then the respective ciphertexts are given by:

c1 = p1 ⊕ k
c2 = p2 ⊕ k

where ⊕ means XOR. If an attacker were to have both ciphertexts c1 and c2, then simply taking the XOR of c1 and c2 yields the XOR of the two plaintexts, p1 ⊕ p2. (This is because taking the XOR of the common key k with itself yields a constant bitstream of zeros.) p1 ⊕ p2 is then the equivalent of a running key cipher.
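A minimal sketch of this cancellation (the two messages are invented for the example, and os.urandom stands in for a truly random pad):

```python
# Reusing a pad: c1 XOR c2 = (p1 XOR k) XOR (p2 XOR k) = p1 XOR p2,
# because k XOR k is all zeros -- the key drops out entirely.
import os

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

p1 = b"HELLO ALICE"
p2 = b"HELLO BOBBY"
k = os.urandom(len(p1))          # the same pad, wrongly used twice

c1, c2 = xor_bytes(p1, k), xor_bytes(p2, k)
assert xor_bytes(c1, c2) == xor_bytes(p1, p2)  # attacker obtains p1 XOR p2 without knowing k
```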
If both plaintexts are in a natural language (e.g., English or Russian), each stands a very high chance of being recovered by heuristic cryptanalysis, with possibly a few ambiguities. Of course, a longer message can only be broken for the portion that overlaps a shorter message, plus perhaps a little more by completing a word or phrase. The most famous exploit of this vulnerability occurred with the Venona project.
Key distribution
Because the pad, like all shared secrets, must be passed and kept secure, and the pad has to be at least as long as the message, there is often no point in using one-time padding, as one can simply send the plain text instead of the pad (as both can be the same size and have to be sent securely). However, once a very long pad has been securely sent (e.g., a computer disk full of random data), it can be used for numerous future messages, until the sum of the message's sizes equals the size of the pad. Quantum key distribution also proposes a solution to this problem, assuming fault-tolerant quantum computers.
Distributing very long one-time pad keys is inconvenient and usually poses a significant security risk. The pad is essentially the encryption key, but unlike keys for modern ciphers, it must be extremely long and is far too difficult for humans to remember. Storage media such as thumb drives, DVD-Rs or personal digital audio players can be used to carry a very large one-time-pad from place to place in a non-suspicious way, but the need to transport the pad physically is a burden compared to the key negotiation protocols of a modern public-key cryptosystem. Such media cannot reliably be erased securely by any means short of physical destruction (e.g., incineration). A 4.7 GB DVD-R full of one-time-pad data, if shredded into particles 1 mm² in size, leaves over 4 megabits of data on each particle. In addition, the risk of compromise during transit (for example, a pickpocket swiping, copying and replacing the pad) is likely to be much greater in practice than the likelihood of compromise for a cipher such as AES. Finally, the effort needed to manage one-time pad key material scales very badly for large networks of communicants—the number of pads required goes up as the square of the number of users freely exchanging messages. For communication between only two persons, or a star network topology, this is less of a problem.
The key material must be securely disposed of after use, to ensure the key material is never reused and to protect the messages sent. Because the key material must be transported from one endpoint to another, and persist until the message is sent or received, it can be more vulnerable to forensic recovery than the transient plaintext it protects (because of possible data remanence).
Authentication
As traditionally used, one-time pads provide no message authentication, the lack of which can pose a security threat in real-world systems. For example, an attacker who knows that the message contains "meet jane and me tomorrow at three thirty pm" can derive the corresponding codes of the pad directly from the two known elements (the encrypted text and the known plaintext). The attacker can then replace that text by any other text of exactly the same length, such as "three thirty meeting is canceled, stay home". The attacker's knowledge of the one-time pad is limited to this byte length, which must be maintained for any other content of the message to remain valid. This is different from malleability where the plaintext is not necessarily known. Without knowing the message, the attacker can also flip bits in a message sent with a one-time pad, without the recipient being able to detect it. Because of their similarities, attacks on one-time pads are similar to attacks on stream ciphers.
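The substitution described above can be sketched as follows (the messages, their equal lengths, and the use of os.urandom are assumptions for the illustration): an attacker who knows the plaintext at some position can replace it with any other text of the same length without ever learning the pad.

```python
# Known-plaintext substitution on an unauthenticated one-time pad:
# XOR out the known plaintext, XOR in a forgery of equal length.
import os

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

plaintext = b"PAY 100 TO BOB"
pad = os.urandom(len(plaintext))
ciphertext = xor_bytes(plaintext, pad)

forged_text = b"PAY 900 TO EVE"                    # same length as the known plaintext
forged_ciphertext = xor_bytes(xor_bytes(ciphertext, plaintext), forged_text)
assert xor_bytes(forged_ciphertext, pad) == forged_text  # the recipient decrypts the forgery cleanly
```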
Standard techniques to prevent this, such as the use of a message authentication code, can be used along with a one-time pad system to prevent such attacks, as can classical methods such as variable length padding and Russian copulation, but they all lack the perfect security the OTP itself has. Universal hashing provides a way to authenticate messages up to an arbitrary security bound (i.e., for any p > 0, a large enough hash ensures that even a computationally unbounded attacker's likelihood of successful forgery is less than p), but this uses additional random data from the pad, and some of these techniques remove the possibility of implementing the system without a computer.
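As a hedged sketch of the universal-hashing idea (a toy construction for one short message block; the prime modulus, the 15-byte limit and the sample messages are all choices made only for this example), a one-time MAC of the form t = (a·m + b) mod p uses two fresh random values a and b taken from pad material and never reused; even a computationally unbounded forger who alters the message passes verification with probability at most about 1/p.

```python
# Toy one-time MAC from a universal hash family: t = (a*m + b) mod p.
# a and b are secret, uniformly random one-time values drawn from the pad;
# forging a tag for a different message succeeds with probability about 1/p.
import secrets

P = (1 << 127) - 1  # a Mersenne prime modulus (illustrative choice)

def keygen() -> tuple[int, int]:
    return secrets.randbelow(P), secrets.randbelow(P)  # fresh (a, b), used once

def mac(message: bytes, key: tuple[int, int]) -> int:
    a, b = key
    assert len(message) <= 15, "toy MAC handles a single short block only"
    m = int.from_bytes(message, "big")  # < 2**120 < P
    return (a * m + b) % P

key = keygen()
tag = mac(b"attack at dawn", key)
assert mac(b"attack at dawn", key) == tag   # genuine message verifies
assert mac(b"retreat at ten", key) != tag   # forgery rejected (except with probability ~1/P)
```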
Common implementation errors
Due to its relative simplicity of implementation, and due to its promise of perfect secrecy, the one-time pad enjoys high popularity among students learning about cryptography, especially as it is often the first algorithm to be presented and implemented during a course. Such "first" implementations often break the requirements for information-theoretic security in one or more ways:
The pad is generated via some algorithm that expands one or more small values into a longer "one-time-pad". This applies equally to all algorithms, from insecure basic mathematical operations like square root decimal expansions to complex, cryptographically secure pseudorandom number generators (CSPRNGs). None of these implementations is a one-time pad; they are stream ciphers by definition. All one-time pads must be generated by a non-algorithmic process, e.g. by a hardware random number generator (see the sketch following this list).
The pad is exchanged using methods that are not information-theoretically secure. If the one-time pad is encrypted with a non-information-theoretically secure algorithm for delivery, the security of the cryptosystem is only as strong as that delivery mechanism. A common flawed delivery mechanism for one-time pads is a standard hybrid cryptosystem that relies on symmetric-key cryptography for pad encryption and asymmetric cryptography for symmetric key delivery. Common secure methods for one-time pad delivery are quantum key distribution, a sneakernet or courier service, or a dead drop.
The implementation does not feature an unconditionally secure authentication mechanism such as a One-time MAC.
The pad is reused (exploited during the Venona project, for example).
The pad is not destroyed immediately after use.
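The sketch below illustrates the first pitfall in the list above (the seed value and the use of SHA-256 in counter mode are arbitrary choices for the example): any scheme that stretches a short secret into a long keystream is, by definition, a stream cipher rather than a one-time pad, because the whole "pad" is determined by the short seed.

```python
# Expanding a short seed into a long "pad" with a deterministic algorithm
# (here SHA-256 in counter mode, purely for illustration) gives a stream-cipher
# keystream, not a one-time pad: the real key is the 16-byte seed.
import hashlib

def keystream(seed: bytes, length: int) -> bytes:
    out = bytearray()
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(seed + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(out[:length])

seed = b"0123456789abcdef"           # 16-byte secret seed (illustrative)
pad = keystream(seed, 1000)          # 1000 "pad" bytes derived from 16 bytes of key
assert pad == keystream(seed, 1000)  # fully reproducible: with |key| << |message|,
                                     # information-theoretic security is impossible
```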
Uses
Applicability
Despite its problems, the one-time-pad retains some practical interest. In some hypothetical espionage situations, the one-time pad might be useful because encryption and decryption can be computed by hand with only pencil and paper. Nearly all other high quality ciphers are entirely impractical without computers. In the modern world, however, computers (such as those embedded in mobile phones) are so ubiquitous that possessing a computer suitable for performing conventional encryption (for example, a phone that can run concealed cryptographic software) will usually not attract suspicion.
The one-time-pad is the optimum cryptosystem with theoretically perfect secrecy.
The one-time-pad is one of the most practical methods of encryption where one or both parties must do all work by hand, without the aid of a computer. This made it important in the pre-computer era, and it could conceivably still be useful in situations where possession of a computer is illegal or incriminating or where trustworthy computers are not available.
One-time pads are practical in situations where two parties in a secure environment must be able to depart from one another and communicate from two separate secure environments with perfect secrecy.
The one-time-pad can be used in superencryption.
The algorithm most commonly associated with quantum key distribution is the one-time pad.
The one-time pad is mimicked by stream ciphers.
Numbers stations often send messages encrypted with a one-time pad.
Historical uses
One-time pads have been used in special circumstances since the early 1900s. In 1923, they were employed for diplomatic communications by the German diplomatic establishment. The Weimar Republic Diplomatic Service began using the method in about 1920. The breaking of poor Soviet cryptography by the British, with messages made public for political reasons in two instances in the 1920s (ARCOS case), appears to have caused the Soviet Union to adopt one-time pads for some purposes by around 1930. KGB spies are also known to have used pencil and paper one-time pads more recently. Examples include Colonel Rudolf Abel, who was arrested and convicted in New York City in the 1950s, and the 'Krogers' (i.e., Morris and Lona Cohen), who were arrested and convicted of espionage in the United Kingdom in the early 1960s. Both were found with physical one-time pads in their possession.
A number of nations have used one-time pad systems for their sensitive traffic. Leo Marks reports that the British Special Operations Executive used one-time pads in World War II to encode traffic between its offices. One-time pads for use with its overseas agents were introduced late in the war. British one-time tape cipher machines included the Rockex and Noreen. The German Stasi Sprach Machine was also capable of using one-time tape, which East Germany, Russia, and even Cuba used to send encrypted messages to their agents.
The World War II voice scrambler SIGSALY was also a form of one-time system. It added noise to the signal at one end and removed it at the other end. The noise was distributed to the channel ends in the form of large shellac records that were manufactured in unique pairs. There were both starting synchronization and longer-term phase drift problems that arose and had to be solved before the system could be used.
The hotline between Moscow and Washington D.C., established in 1963 after the 1962 Cuban Missile Crisis, used teleprinters protected by a commercial one-time tape system. Each country prepared the keying tapes used to encode its messages and delivered them via their embassy in the other country. A unique advantage of the OTP in this case was that neither country had to reveal more sensitive encryption methods to the other.
U.S. Army Special Forces used one-time pads in Vietnam. By using Morse code with one-time pads and continuous wave radio transmission (the carrier for Morse code), they achieved both secrecy and reliable communications.
During the 1983 Invasion of Grenada, U.S. forces found a supply of pairs of one-time pad books in a Cuban warehouse.
Starting in 1988, the African National Congress (ANC) used disk-based one-time pads as part of a secure communication system between ANC leaders outside South Africa and in-country operatives as part of Operation Vula, a successful effort to build a resistance network inside South Africa. Random numbers on the disk were erased after use. A Belgian airline stewardess acted as courier to bring in the pad disks. A regular resupply of new disks was needed as they were used up fairly quickly. One problem with the system was that it could not be used for secure data storage. Later Vula added a stream cipher keyed by book codes to solve this problem.
A related notion is the one-time code—a signal, used only once; e.g., "Alpha" for "mission completed", "Bravo" for "mission failed" or even "Torch" for "Allied invasion of French Northern Africa" cannot be "decrypted" in any reasonable sense of the word. Understanding the message will require additional information, often 'depth' of repetition, or some traffic analysis. However, such strategies (though often used by real operatives, and baseball coaches) are not a cryptographic one-time pad in any significant sense.
NSA
At least into the 1970s, the U.S. National Security Agency (NSA) produced a variety of manual one-time pads, both general purpose and specialized, with 86,000 one-time pads produced in fiscal year 1972. Special purpose pads were produced for what NSA called "pro forma" systems, where “the basic framework, form or format of every message text is identical or nearly so; the same kind of information, message after message, is to be presented in the same order, and only specific values, like numbers, change with each message.” Examples included nuclear launch messages and radio direction finding reports (COMUS).
General purpose pads were produced in several formats, a simple list of random letters (DIANA) or just numbers (CALYPSO), tiny pads for covert agents (MICKEY MOUSE), and pads designed for more rapid encoding of short messages, at the cost of lower density. One example, ORION, had 50 rows of plaintext alphabets on one side and the corresponding random cipher text letters on the other side. By placing a sheet on top of a piece of carbon paper with the carbon face up, one could circle one letter in each row on one side and the corresponding letter on the other side would be circled by the carbon paper. Thus one ORION sheet could quickly encode or decode a message up to 50 characters long. Production of ORION pads required printing both sides in exact registration, a difficult process, so NSA switched to another pad format, MEDEA, with 25 rows of paired alphabets and random characters. (See Commons:Category:NSA one-time pads for illustrations.)
The NSA also built automated systems for the "centralized headquarters of CIA and Special Forces units so that they can efficiently process the many separate one-time pad messages to and from individual pad holders in the field".
During World War II and into the 1950s, the U.S. made extensive use of one-time tape systems. In addition to providing confidentiality, circuits secured by one-time tape ran continually, even when there was no traffic, thus protecting against traffic analysis. In 1955, NSA produced some 1,660,000 rolls of one time tape. Each roll was 8 inches in diameter, contained 100,000 characters, lasted 166 minutes and cost $4.55 to produce. By 1972, only 55,000 rolls were produced, as one-time tapes were replaced by rotor machines such as SIGTOT, and later by electronic devices based on shift registers. The NSA describes one-time tape systems like 5-UCO and SIGTOT as being used for intelligence traffic until the introduction of the electronic cipher based KW-26 in 1957.
Exploits
While one-time pads provide perfect secrecy if generated and used properly, small mistakes can lead to successful cryptanalysis:
In 1944–1945, the U.S. Army's Signals Intelligence Service was able to solve a one-time pad system used by the German Foreign Office for its high-level traffic, codenamed GEE. GEE was insecure because the pads were not sufficiently random—the machine used to generate the pads produced predictable output.
In 1945, the US discovered that Canberra–Moscow messages were being encrypted first using a code-book and then using a one-time pad. However, the one-time pad used was the same one used by Moscow for Washington, D.C.–Moscow messages. Combined with the fact that some of the Canberra–Moscow messages included known British government documents, this allowed some of the encrypted messages to be broken.
One-time pads were employed by Soviet espionage agencies for covert communications with agents and agent controllers. Analysis has shown that these pads were generated by typists using actual typewriters. This method is not truly random, as it makes certain convenient key sequences more likely than others. It nevertheless proved to be generally effective because the pads were still somewhat unpredictable: the typists were not following rules, and different typists produced different patterns of pads. Without copies of the key material used, only some defect in the generation method or reuse of keys offered much hope of cryptanalysis. Beginning in the late 1940s, US and UK intelligence agencies were able to break some of the Soviet one-time pad traffic to Moscow during WWII as a result of errors made in generating and distributing the key material. One suggestion is that Moscow Centre personnel were somewhat rushed by the presence of German troops just outside Moscow in late 1941 and early 1942, and they produced more than one copy of the same key material during that period. This decades-long effort was finally codenamed VENONA (BRIDE had been an earlier name); it produced a considerable amount of information. Even so, only a small percentage of the intercepted messages were either fully or partially decrypted (a few thousand out of several hundred thousand).
The one-time tape systems used by the U.S. employed electromechanical mixers to combine bits from the message and the one-time tape. These mixers radiated considerable electromagnetic energy that could be picked up by an adversary at some distance from the encryption equipment. This effect, first noticed by Bell Labs during World War II, could allow interception and recovery of the plaintext of messages being transmitted, a vulnerability code-named Tempest.
See also
Agrippa (A Book of the Dead)
Information theoretic security
Numbers station
One-time password
Session key
Steganography
Tradecraft
Unicity distance
Notes
References
Further reading
External links
Detailed description and history of One-time Pad with examples and images on Cipher Machines and Cryptology
The FreeS/WAN glossary entry with a discussion of OTP weaknesses
Information-theoretically secure algorithms
Stream ciphers
Cryptography
1882 introductions
Oxfordian theory of Shakespeare authorship
The Oxfordian theory of Shakespeare authorship contends that Edward de Vere, 17th Earl of Oxford, wrote the plays and poems traditionally attributed to William Shakespeare. Though literary scholars reject all alternative authorship candidates, including Oxford, interest in the Oxfordian theory continues. Since the 1920s, the Oxfordian theory has been the most popular alternative Shakespeare authorship theory.
The convergence of documentary evidence of the type used by academics for authorial attribution – title pages, testimony by other contemporary poets and historians, and official records – sufficiently establishes Shakespeare's authorship for the overwhelming majority of Shakespeare scholars and literary historians, and no such documentary evidence links Oxford to Shakespeare's works. Oxfordians, however, reject the historical record and claim that circumstantial evidence supports Oxford's authorship, proposing that the contradictory historical evidence is part of a conspiracy that falsified the record to protect the identity of the real author. Scholarly literary specialists consider the Oxfordian method of interpreting the plays and poems as autobiographical, and then using them to construct a hypothetical author's biography, as unreliable and logically unsound.
Oxfordian arguments rely heavily on biographical allusions; adherents find correspondences between incidents and circumstances in Oxford's life and events in Shakespeare's plays, sonnets, and longer poems. The case also relies on perceived parallels of language, idiom, and thought between Shakespeare's works and Oxford's own poetry and letters. Oxfordians claim that marked passages in Oxford's Bible can be linked to Biblical allusions in Shakespeare's plays. That no plays survive under Oxford's name is also important to the Oxfordian theory. Oxfordians interpret certain 16th- and 17th-century literary allusions as indicating that Oxford was one of the more prominent suppressed anonymous and/or pseudonymous writers of the day. Under this scenario, Shakespeare was either a "front man" or "play-broker" who published the plays under his own name or was merely an actor with a similar name, misidentified as the playwright since the first Shakespeare biographies of the early 1700s.
The most compelling evidence against the Oxfordian theory is de Vere's death in 1604, since the generally accepted chronology of Shakespeare's plays places the composition of approximately twelve of the plays after that date. Oxfordians respond that the annual publication of "new" or "corrected" Shakespeare plays stopped in 1604, and that the dedication to Shakespeare's Sonnets implies that the author was dead prior to their publication in 1609. Oxfordians believe the reason so many of the "late plays" show evidence of revision and collaboration is because they were completed by other playwrights after Oxford's death.
History of the Oxfordian theory
The theory that the works of Shakespeare were in fact written by someone other than William Shakespeare dates back to the mid-nineteenth century. In 1857, the first book on the topic, The Philosophy of the Plays of Shakspere Unfolded, by Delia Bacon, was published. Bacon proposed the first "group theory" of Shakespearian authorship, attributing the works to a committee headed by Francis Bacon and including Walter Raleigh. De Vere is mentioned once in the book, in a list of "high-born wits and poets", who were associated with Raleigh. Some commentators have interpreted this to imply that he was part of the group of authors. Throughout the 19th century Bacon was the preferred hidden author. Oxford is not known to have been mentioned again in this context.
By the beginning of the twentieth century other candidates, typically aristocrats, were put forward, most notably Roger Manners, 5th Earl of Rutland, and William Stanley, 6th Earl of Derby. Oxford's candidacy as sole author was first proposed by J. Thomas Looney in his 1920 book Shakespeare Identified in Edward de Vere, 17th Earl of Oxford. Following earlier anti-Stratfordians, Looney argued that the known facts of Shakespeare's life did not fit the personality he ascribed to the author of the plays. Like other anti-Stratfordians before him, Looney referred to the absence of records concerning Shakespeare's education, his limited experience of the world, his allegedly poor handwriting skills (evidenced in his signatures), and the "dirt and ignorance" of Stratford at the time. Shakespeare had a petty "acquisitive disposition", he said, while the plays made heroes of free-spending figures. They also portrayed middle and lower-class people negatively, while Shakespearian heroes were typically aristocratic. Looney referred to scholars who found in the plays evidence that their author was an expert in law, widely read in ancient Latin literature, and could speak French and Italian. Looney believed that even very early works such as Love's Labour's Lost implied that he was already a person of "matured powers", in his forties or fifties, with wide experience of the world. Looney considered that Oxford's personality fitted the one he deduced from the plays, and also identified characters in the plays as detailed portraits of Oxford's family and personal contacts. Several characters, including Hamlet and Bertram (in All's Well that Ends Well), were, he believed, self-portraits. Adapting arguments earlier used for Rutland and Derby, Looney fitted events in the plays to episodes in Oxford's life, including his travels to France and Italy, the settings for many plays. Oxford's death in 1604 was linked to a drop-off in the publication of Shakespeare plays. Looney declared that the late play The Tempest was not written by Oxford, and that others performed or published after Oxford's death were most probably left incomplete and finished by other writers, thus explaining the apparent idiosyncrasies of style found in the late Shakespeare plays. Looney also introduced the argument that the reference to the "ever-living poet" in the 1609 dedication to Shakespeare's sonnets implied that the author was dead at the time of publication.
Sigmund Freud, the novelist Marjorie Bowen, and several 20th-century celebrities found the thesis persuasive, and Oxford soon overtook Bacon as the favoured alternative candidate to Shakespeare, though academic Shakespearians mostly ignored the subject. Looney's theory attracted a number of activist followers who published books supplementing his own and added new arguments, most notably Percy Allen, Bernard M. Ward, Louis P. Bénézet and Charles Wisner Barrell. Mainstream scholar Steven W. May has noted that Oxfordians of this period made genuine contributions to knowledge of Elizabethan history, citing "Ward's quite competent biography of the Earl" and "Charles Wisner Barrell's identification of Edward Vere, Oxford's illegitimate son by Anne Vavasour" as examples. In 1921, Sir George Greenwood, Looney, and others founded The Shakespeare Fellowship, an organization originally dedicated to the discussion and promotion of ecumenical anti-Stratfordian views, but which later became devoted to promoting Oxford as the true Shakespeare.
Decline and revival
After a period of decline of the Oxfordian theory beginning with World War II, in 1952 Dorothy and Charlton Greenwood Ogburn published the 1,300-page This Star of England, which briefly revived Oxfordism. A series of critical academic books and articles, however, held in check any appreciable growth of anti-Stratfordism and Oxfordism, most notably The Shakespeare Ciphers Examined (1957), by William and Elizebeth Friedman, The Poacher from Stratford (1958), by Frank Wadsworth, Shakespeare and His Betters (1958), by Reginald Churchill, The Shakespeare Claimants (1962), by H. N. Gibson, and Shakespeare and his Rivals: A Casebook on the Authorship Controversy (1962), by George L. McMichael and Edgar M. Glenn. By 1968 the newsletter of The Shakespeare Oxford Society reported that "the missionary or evangelical spirit of most of our members seems to be at a low ebb, dormant, or non-existent". In 1974, membership in the society stood at 80. In 1979, the publication of an analysis of the Ashbourne portrait dealt a further blow to the movement. The painting, long claimed to be one of the portraits of Shakespeare, but considered by Barrell to be an overpaint of a portrait of the Earl of Oxford, turned out to represent neither, but rather depicted Hugh Hamersley.
Charlton Ogburn, Jr., was elected president of The Shakespeare Oxford Society in 1976 and kick-started the modern revival of the Oxfordian movement by seeking publicity through moot court trials, media debates, television and later the Internet, including Wikipedia, methods which became standard for Oxfordian and anti-Stratfordian promoters because of their success in recruiting members of the lay public. He portrayed academic scholars as self-interested members of an "entrenched authority" that aimed to "outlaw and silence dissent in a supposedly free society", and proposed to counter their influence by portraying Oxford as a candidate on equal footing with Shakespeare.
In 1985 Ogburn published his 900-page The Mysterious William Shakespeare: the Myth and the Reality, with a Foreword by Pulitzer prize-winning historian David McCullough who wrote: "[T]his brilliant, powerful book is a major event for everyone who cares about Shakespeare. The scholarship is surpassing—brave, original, full of surprise... The strange, difficult, contradictory man who emerges as the real Shakespeare, Edward de Vere, 17th Earl of Oxford, is not just plausible but fascinating and wholly believable."
By framing the issue as one of fairness in the atmosphere of conspiracy that permeated America after Watergate, he used the media to circumnavigate academia and appeal directly to the public. Ogburn's efforts secured Oxford the place as the most popular alternative candidate.
Although Shakespearian experts disparaged Ogburn's methodology and his conclusions, one reviewer, Richmond Crinkley, the Folger Shakespeare Library's former director of educational programs, acknowledged the appeal of Ogburn's approach, writing that the doubts over Shakespeare, "arising early and growing rapidly", have a "simple, direct plausibility", and the dismissive attitude of established scholars only worked to encourage such doubts. Though Crinkley rejected Ogburn's thesis, calling it "less satisfactory than the unsatisfactory orthodoxy it challenges", he believed that one merit of the book lay in how it forces orthodox scholars to reexamine their concept of Shakespeare as author. Spurred by Ogburn's book, "[i]n the last decade of the twentieth century members of the Oxfordian camp gathered strength and made a fresh assault on the Shakespearean citadel, hoping finally to unseat the man from Stratford and install de Vere in his place."
The Oxfordian theory returned to public attention in anticipation of the late October 2011 release of Roland Emmerich's drama film Anonymous. Its distributor, Sony Pictures, advertised that the film "presents a compelling portrait of Edward de Vere as the true author of Shakespeare's plays", and commissioned high school and college-level lesson plans to promote the authorship question to history and literature teachers across the United States. According to Sony Pictures, "the objective for our Anonymous program, as stated in the classroom literature, is 'to encourage critical thinking by challenging students to examine the theories about the authorship of Shakespeare's works and to formulate their own opinions.' The study guide does not state that Edward de Vere is the writer of Shakespeare's work, but it does pose the authorship question which has been debated by scholars for decades".
Variant Oxfordian theories
Although most Oxfordians agree on the main arguments for Oxford, the theory has spawned schismatic variants that are not accepted by all Oxfordians, though they have attracted considerable attention.
Prince Tudor theory
In a letter written by Looney in 1933, he mentions that Allen and Ward were "advancing certain views respecting Oxford and Queen Eliz. which appear to me extravagant & improbable, in no way strengthen Oxford’s Shakespeare claims, and are likely to bring the whole cause into ridicule." Allen and Ward believed that they had discovered that Elizabeth and Oxford were lovers and had conceived a child. Allen developed the theory in his 1934 book Anne Cecil, Elizabeth & Oxford. He argued that the child was given the name William Hughes, who became an actor under the stage-name "William Shakespeare". He adopted the name because his father, Oxford, was already using it as a pen-name for his plays. Oxford had borrowed the name from a third Shakespeare, the man of that name from Stratford-upon-Avon, who was a law student at the time, but who was never an actor or a writer. Allen later changed his mind about Hughes and decided that the concealed child was the Earl of Southampton, the dedicatee of Shakespeare's narrative poems. This secret history, which has become known as the Prince Tudor theory, was covertly represented in Oxford's plays and poems and remained hidden until Allen and Ward's discoveries. The narrative poems and sonnets had been written by Oxford for his son. This Star of England (1952) by Charlton and Dorothy Ogburn included arguments in support of this version of the theory. Their son, Charlton Ogburn, Jr, agreed with Looney that the theory was an impediment to the Oxfordian movement and omitted all discussion about it in his own Oxfordian works.
However, the theory was revived and expanded by Elisabeth Sears in Shakespeare and the Tudor Rose (2002), and Hank Whittemore in The Monument (2005), an analysis of Shakespeare's Sonnets which interprets the poems as a poetic history of Queen Elizabeth, Oxford, and Southampton. Paul Streitz's Oxford: Son of Queen Elizabeth I (2001) advances a variation on the theory: that Oxford himself was the illegitimate son of Queen Elizabeth by her stepfather, Thomas Seymour. Oxford was thus the half-brother of his own son by the queen. Streitz also believes that the queen had children by the Earl of Leicester. These were Robert Cecil, 1st Earl of Salisbury, Robert Devereux, 2nd Earl of Essex, Mary Sidney and Elizabeth Leighton.
Attribution of other works to Oxford
As with other candidates for authorship of Shakespeare's works, Oxford's advocates have attributed numerous non-Shakespearian works to him. Looney began the process in his 1921 edition of de Vere's poetry. He suggested that de Vere was also responsible for some of the literary works credited to Arthur Golding, Anthony Munday and John Lyly. Streitz credits Oxford with the Authorized King James Version of the Bible. Two professors of linguistics have claimed that de Vere wrote not only the works of Shakespeare, but most of what is memorable in English literature during his lifetime, with such names as Edmund Spenser, Christopher Marlowe, Philip Sidney, John Lyly, George Peele, George Gascoigne, Raphael Holinshed, Robert Greene, Thomas Phaer, and Arthur Golding being among dozens of further pseudonyms of de Vere. Ramon Jiménez has credited Oxford with such plays as The True Tragedy of Richard III and Edmund Ironside.
Group theories
Group theories in which Oxford played the principal role as writer, but collaborated with others to create the Shakespeare canon, were adopted by a number of early Oxfordians. Looney himself was willing to concede that Oxford may have been assisted by his son-in-law William Stanley, 6th Earl of Derby, who perhaps wrote The Tempest. B.M. Ward also suggested that Oxford and Derby worked together. In his later writings Percy Allen argued that Oxford led a group of writers, among whom was William Shakespeare. Group theories with Oxford as the principal author or creative "master mind" were also proposed by Gilbert Standen in Shakespeare Authorship (1930), Gilbert Slater in Seven Shakespeares (1931) and Montagu William Douglas in Lord Oxford and the Shakespeare Group (1952).
Case against Oxfordian theory
Methodology of Oxfordian argument
Specialists in Elizabethan literary history object to the methodology of Oxfordian arguments. In lieu of any evidence of the type commonly used for authorship attribution, Oxfordians discard the methods used by historians and employ other types of arguments to make their case, the most common being supposed parallels between Oxford's life and Shakespeare's works.
Another is finding cryptic allusions to Oxford's supposed play writing in other literary works of the era that to them suggest that his authorship was obvious to those "in the know". David Kathman writes that their methods are subjective and devoid of any evidential value, because they use a "double standard". Their arguments are "not taken seriously by Shakespeare scholars because they consistently distort and misrepresent the historical record", "neglect to provide necessary context" and are in some cases "outright fabrication[s]". One major evidential objection to the Oxfordian theory is Edward de Vere's 1604 death, after which a number of Shakespeare's plays are generally believed to have been written. In The Shakespeare Claimants, a 1962 examination of the authorship question, H. N. Gibson concluded that "... on analysis the Oxfordian case appears to me a very weak one".
Mainstream objections
Mainstream academics have often argued that the Oxford theory is based on snobbery: that anti-Stratfordians reject the idea that the son of a mere tradesman could write the plays and poems of Shakespeare. The Shakespeare Oxford Society has responded that this claim is "a substitute for reasoned responses to Oxfordian evidence and logic" and is merely an ad hominem attack.
Mainstream critics further say that, if William Shakespeare were a fraud instead of the true author, the number of people involved in suppressing this information would have made it highly unlikely to succeed. And citing the "testimony of contemporary writers, court records and much else" supporting Shakespeare's authorship, Columbia University professor James S. Shapiro says any theory claiming that "there must have been a conspiracy to suppress the truth of de Vere's authorship" based on the idea that "the very absence of surviving evidence proves the case" is a logically fatal tautology.
Circumstantial evidence
While no documentary evidence connects Oxford (or any alternative author) to the plays of Shakespeare, Oxfordian writers, including Mark Anderson and Charlton Ogburn, say that connection is made by considerable circumstantial evidence inferred from Oxford's connections to the Elizabethan theatre and poetry scene; the participation of his family in the printing and publication of the First Folio; his relationship with the Earl of Southampton (believed by most Shakespeare scholars to have been Shakespeare's patron); as well as a number of specific incidents and circumstances of Oxford's life that Oxfordians say are depicted in the plays themselves.
Theatre connections
Oxford was noted for his literary and theatrical patronage, garnering dedications from a wide range of authors. For much of his adult life, Oxford patronised both adult and boy acting companies, as well as performances by musicians, acrobats and performing animals, and in 1583, he was a leaseholder of the first Blackfriars Theatre in London.
Family connections
Oxford was related to several literary figures. His mother, Margery Golding, was the sister of the Ovid translator Arthur Golding, and his uncle, Henry Howard, Earl of Surrey, was the inventor of the English or Shakespearian sonnet form.
The three dedicatees of Shakespeare's works (the earls of Southampton, Montgomery and Pembroke) were each proposed as husbands for the three daughters of Edward de Vere. Venus and Adonis and The Rape of Lucrece were dedicated to Southampton (whom many scholars have argued was the Fair Youth of the Sonnets), and the First Folio of Shakespeare's plays was dedicated to Montgomery (who married Susan de Vere) and Pembroke (who was once engaged to Bridget de Vere).
Oxford's Bible
In the late 1990s, Roger A. Stritmatter conducted a study of the marked passages found in Edward de Vere's Geneva Bible, which is now owned by the Folger Shakespeare Library. The Bible contains 1,028 instances of underlined words or passages and a few hand-written annotations, most of which consist of a single word or fragment. Stritmatter believes about a quarter of the marked passages appear in Shakespeare's works as either a theme, allusion, or quotation. Stritmatter grouped the marked passages into eight themes. Arguing that the themes fitted de Vere's known interests, he proceeded to link specific themes to passages in Shakespeare. Critics have doubted that any of the underlinings or annotations in the Bible can be reliably attributed to de Vere and not the book's other owners prior to its acquisition by the Folger Shakespeare Library in 1925, as well as challenging the looseness of Stritmatter's standards for a Biblical allusion in Shakespeare's works and arguing that there is no statistical significance to the overlap.
Stratford connections
Shakespeare's native Avon and Stratford are referred to in two prefatory poems in the 1623 First Folio, one of which refers to Shakespeare as "Swan of Avon" and another to the author's "Stratford monument". Oxfordians say the first of these phrases could refer to one of Edward de Vere's manors, Bilton Hall, near the Forest of Arden, in Rugby, on the River Avon. This view was first expressed by Charles Wisner Barrell, who argued that De Vere "kept the place as a literary hideaway where he could carry on his creative work without the interference of his father-in-law, Burghley, and other distractions of Court and city life." Oxfordians also consider it significant that the nearest town to the parish of Hackney, where de Vere later lived and was buried, was also named Stratford. Mainstream scholar Irvin Matus demonstrated that Oxford sold the Bilton house in 1580, having previously rented it out, making it unlikely that Ben Jonson's 1623 poem would identify Oxford by referring to a property he once owned, but never lived in, and sold 43 years earlier. Nor is there any evidence of a monument to Oxford in Stratford, London, or anywhere else; his widow provided for the creation of one at Hackney in her 1613 will, but there is no evidence that it was ever erected.
Oxford's annuity
Oxfordians also consider Rev. Dr. John Ward's 1662 diary entry, which states that Shakespeare wrote two plays a year "and for that had an allowance so large that he spent at the rate of £1,000 a year", to be a critical piece of evidence, since Queen Elizabeth I gave Oxford an annuity of exactly £1,000 beginning in 1586 that was continued until his death. Ogburn wrote that the annuity was granted "under mysterious circumstances", and Anderson suggests it was granted because of Oxford's writing patriotic plays for government propaganda. However, the documentary evidence indicates that the allowance was meant to relieve Oxford's embarrassed financial situation caused by the ruination of his estate.
Oxford's travels and the settings of Shakespeare's plays
Almost half of Shakespeare's plays are set in Italy, many of them containing details of Italian laws, customs, and culture which Oxfordians believe could only have been obtained by personal experiences in Italy, and especially in Venice. The author of The Merchant of Venice, Looney believed, "knew Italy first hand and was touched with the life and spirit of the country". This argument had earlier been used by supporters of the Earl of Rutland and the Earl of Derby as authorship candidates, both of whom had also travelled on the continent of Europe. Oxfordian William Farina refers to Shakespeare's apparent knowledge of the Jewish ghetto, Venetian architecture and laws in The Merchant of Venice, especially the city's "notorious Alien Statute". Historical documents confirm that Oxford lived in Venice, and travelled for over a year through Italy. He disliked the country, writing in a letter to Lord Burghley dated 24 September 1575, "I am glad I have seen it, and I care not ever to see it any more". Still, he remained in Italy for another six months, leaving Venice in March 1576. According to Anderson, Oxford definitely visited Venice, Padua, Milan, Genoa, Palermo, Florence, Siena and Naples, and probably passed through Messina, Mantua and Verona, all cities used as settings by Shakespeare. In testimony before the Venetian Inquisition, Edward de Vere was said to be fluent in Italian.
However, some Shakespeare scholars say that Shakespeare gets many details of Italian life wrong, including the laws and urban geography of Venice. Kenneth Gross writes that "the play itself knows nothing about the Venetian ghetto; we get no sense of a legally separate region of Venice where Shylock must dwell." Scott McCrea describes the setting as "a nonrealistic Venice" and the laws invoked by Portia as part of the "imaginary world of the play", inconsistent with actual legal practice. Charles Ross points out that Shakespeare's Alien Statute bears little resemblance to any Italian law. For later plays such as Othello, Shakespeare probably used Lewes Lewknor's 1599 English translation of Gasparo Contarini's The Commonwealth and Government of Venice for some details about Venice's laws and customs.
Shakespeare derived much of this material from John Florio, an Italian scholar living in England who was later thanked by Ben Jonson for helping him get Italian details right for his play Volpone. Keir Elam has traced Shakespeare's Italian idioms in Shrew and some of the dialogue to Florio's Second Fruits, a bilingual introduction to Italian language and culture published in 1591. Jason Lawrence believes that Shakespeare’s Italian dialogue in the play derives "almost entirely" from Florio’s First Fruits (1578). He also believes that Shakespeare became more proficient in reading the language as set out in Florio’s manuals, as evidenced by his increasing use of Florio and other Italian sources for writing the plays.
Oxford's education and knowledge of court life
In 1567 Oxford was admitted to Gray's Inn, one of the Inns of Court which Justice Shallow reminisces about in Henry IV, Part 2. Sobran observes that the Sonnets "abound not only in legal terms – more than 200 – but also in elaborate legal conceits." These terms include: allege, auditor, defects, exchequer, forfeit, heirs, impeach, lease, moiety, recompense, render, sureties, and usage. Shakespeare also uses the legal term "quietus" (final settlement) in Sonnet 134, the last Fair Youth sonnet.
Regarding Oxford's knowledge of court life, which Oxfordians believe is reflected throughout the plays, mainstream scholars say that any special knowledge of the aristocracy appearing in the plays can be more easily explained by Shakespeare's life-time of performances before nobility and royalty, and possibly, as Gibson theorises, "by visits to his patron's house, as Marlowe visited Walsingham."
Oxford's literary reputation
Oxford's lyric poetry
Some of Oxford's lyric works have survived. Steven W. May, an authority on Oxford's poetry, attributes sixteen poems definitely, and four possibly, to Oxford noting that these are probably "only a good sampling" as "both Webbe (1586) and Puttenham (1589) rank him first among the courtier poets, an eminence he probably would not have been granted, despite his reputation as a patron, by virtue of a mere handful of lyrics".
May describes Oxford as a "competent, fairly experimental poet working in the established modes of mid-century lyric verse" and his poetry as "examples of the standard varieties of mid-Elizabethan amorous lyric". In 2004, May wrote that Oxford's poetry was "one man's contribution to the rhetorical mainstream of an evolving Elizabethan poetic" and challenged readers to distinguish any of it from "the output of his mediocre mid-century contemporaries". C. S. Lewis wrote that de Vere's poetry shows "a faint talent", but is "for the most part undistinguished and verbose."
Comparisons to Shakespeare's work
In the opinion of J. Thomas Looney, as "far as forms of versification are concerned De Vere presents just that rich variety which is so noticeable in Shakespeare; and almost all the forms he employs we find reproduced in the Shakespeare work." Oxfordian Louis P. Bénézet created the "Bénézet test", a collage of lines from Shakespeare and lines he thought were representative of Oxford, challenging non-specialists to tell the difference between the two authors. May notes that Looney compared various motifs, rhetorical devices and phrases with certain Shakespeare works to find similarities he said were "the most crucial in the piecing together of the case", but that for some of those "crucial" examples Looney used six poems mistakenly attributed to Oxford that were actually written by Greene, Campion, and Greville. Bénézet also used two lines from Greene that he thought were Oxford's, while succeeding Oxfordians, including Charles Wisner Barrell, have also misattributed poems to Oxford. "This on-going confusion of Oxford's genuine verse with that of at least three other poets", writes May, "illustrates the wholesale failure of the basic Oxfordian methodology."
According to a computerised textual comparison developed by the Claremont Shakespeare Clinic, the styles of Shakespeare and Oxford were found to be "light years apart", and the odds of Oxford having written Shakespeare were reported as "lower than the odds of getting hit by lightning". Furthermore, while the First Folio shows traces of a dialect identical to Shakespeare's, the Earl of Oxford, raised in Essex, spoke an East Anglian dialect. John Shahan and Richard Whalen condemned the Claremont study, calling it "apples to oranges", and noting that the study did not compare Oxford's songs to Shakespeare's songs, did not compare a clean unconfounded sample of Oxford's poems with Shakespeare's poems, and charged that the students under Elliott and Valenza's supervision incorrectly assumed that Oxford's youthful verse was representative of his mature poetry.
Joseph Sobran's book, Alias Shakespeare, includes Oxford's known poetry in an appendix with what he considers extensive verbal parallels with the work of Shakespeare, and he argues that Oxford's poetry is comparable in quality to some of Shakespeare's early work, such as Titus Andronicus. Other Oxfordians say that de Vere's extant work is that of a young man and should be considered juvenilia, while May believes that all the evidence dates his surviving work to his early 20s and later.
Contemporary reception
Four contemporary critics praise Oxford as a poet and a playwright, three of them within his lifetime:
William Webbe's Discourse of English Poetrie (1586) surveys and criticises the early Elizabethan poets and their works. He parenthetically mentions those of Elizabeth's court, and names Oxford as "the most excellent" among them.
The Arte of English Poesie (1589), attributed to George Puttenham, includes Oxford on a list of courtier poets and prints some of his verses as exemplars of "his excellencie and wit." He also praises Oxford and Richard Edwardes as playwrights, saying that they "deserve the hyest price" for the works of "Comedy and Enterlude" that he has seen.
Francis Meres' 1598 Palladis Tamia mentions both Oxford and Shakespeare as among several playwrights who are "the best for comedy amongst us".
Henry Peacham's 1622 The Compleat Gentleman includes Oxford on a list of courtier and would-be courtier Elizabethan poets.
Mainstream scholarship characterises the extravagant praise for de Vere's poetry more as a convention of flattery than honest appreciation of literary merit. Alan Nelson, de Vere's documentary biographer, writes that "[c]ontemporary observers such as Harvey, Webbe, Puttenham and Meres clearly exaggerated Oxford's talent in deference to his rank."
Perceived allusions to Oxford as a concealed writer
Before the advent of copyright, anonymous and pseudonymous publication was a common practice in the sixteenth century publishing world, and a passage in the Arte of English Poesie (1589), an anonymously published work itself, mentions in passing that literary figures in the court who wrote "commendably well" circulated their poetry only among their friends, "as if it were a discredit for a gentleman to seem learned" (Book 1, Chapter 8). In another passage 23 chapters later, the author (probably George Puttenham) speaks of aristocratic writers who, if their writings were made public, would appear to be excellent. It is in this passage that Oxford appears on a list of poets.
According to Daniel Wright, these combined passages confirm that Oxford was one of the concealed writers in the Elizabethan court. Critics of this view argue that neither Oxford nor any other writer is here identified as a concealed writer, but rather as the first in a list of known modern writers whose works have already been "made public", "of which number is first" Oxford, adding to the publicly acknowledged literary tradition dating back to Geoffrey Chaucer. Other critics interpret the passage to mean that the courtly writers and their works are known within courtly circles, but not to the general public. In either case, neither Oxford nor anyone else is identified as a hidden writer or one that used a pseudonym.
Oxfordians argue that at the time of the passage's composition (pre-1589), the writers referenced were not in print, and interpret Puttenham's passage (that the noblemen preferred to 'suppress' their work to avoid the discredit of appearing learned) to mean that they were 'concealed'. They cite Sir Philip Sidney, none of whose poetry was published until after his premature death, as an example. Similarly, by 1589 nothing by Greville was in print, and only one of Walter Raleigh's works had been published.
Critics point out that six of the nine poets listed had appeared in print under their own names long before 1589, including a number of Oxford's poems in printed miscellanies, and the first poem published under Oxford's name was printed in 1572, 17 years before Puttenham's book was published. Several other contemporary authors name Oxford as a poet, and Puttenham himself quotes one of Oxford's verses elsewhere in the book, referring to him by name as the author, so Oxfordians misread Puttenham.
Oxfordians also believe other texts refer to Edward de Vere as a concealed writer. They argue that satirist John Marston's Scourge of Villanie (1598) contains further cryptic allusions to Oxford, named as "Mutius". Marston expert Arnold Davenport believes that Mutius is the bishop-poet Joseph Hall and that Marston is criticising Hall's satires.
There is a description of the figure of Oxford in The Revenge of Bussy D'Ambois, a 1613 play by George Chapman, who has been suggested as the Rival Poet of Shakespeare's Sonnets. Chapman describes Oxford as "Rare and most absolute" in form and says he was "of spirit passing great / Valiant and learn’d, and liberal as the sun". He adds that he "spoke and writ sweetly" of both learned subjects and matters of state ("public weal").
Chronology of the plays and Oxford's 1604 death
For mainstream Shakespearian scholars, the most compelling evidence against Oxford (besides the historical evidence for William Shakespeare) is his death in 1604, since the generally accepted chronology of Shakespeare's plays places the composition of approximately twelve of the plays after that date. Critics often cite The Tempest and Macbeth, for example, as having been written after 1604.
The exact dates of the composition of most of Shakespeare's plays are uncertain, although David Bevington says it is a 'virtually unanimous' opinion among teachers and scholars of Shakespeare that the canon of late plays depicts an artistic journey that extends well beyond 1604. Evidence for this includes allusions to historical events and literary sources which postdate 1604, as well as Shakespeare's adaptation of his style to accommodate Jacobean literary tastes and the changing membership of the King's Men and their different venues.
Oxfordians say that the conventional composition dates for the plays were developed by mainstream scholars to fit within Shakespeare's lifetime and that no evidence exists that any plays were written after 1604. Anderson argues that all of the Jacobean plays were written before 1604, selectively citing non-Oxfordian scholars like Alfred Harbage, Karl Elze, and Andrew Cairncross to bolster his case. Anderson notes that from 1593 through 1603, the publication of new plays appeared at the rate of two per year, and whenever an inferior or pirated text was published, it was typically followed by a genuine text described on the title page as "newly augmented" or "corrected". After the publication of the Q1 and Q2 Hamlet in 1603, no new plays were published until 1608. Anderson observes that, "After 1604, the 'newly correct[ing]' and 'augment[ing]' stops. Once again, the Shake-speare [sic] enterprise appears to have shut down".
Notable silences
Because Shakespeare lived until 1616, Oxfordians question why, if he were the author, did he not eulogise Queen Elizabeth at her death in 1603 or Henry, Prince of Wales, at his in 1612. They believe Oxford's 1604 death provides the explanation. In an age when such actions were expected, Shakespeare also failed to memorialise the coronation of James I in 1604, the marriage of Princess Elizabeth in 1612, and the investiture of Prince Charles as the new Prince of Wales in 1613.
Anderson contends that Shakespeare refers to the latest scientific discoveries and events through the end of the 16th century, but "is mute about science after de Vere’s [Oxford’s] death in 1604". He believes that the absence of any mention of the spectacular supernova of October 1604 or Kepler’s revolutionary 1609 study of planetary orbits are especially noteworthy.
The move to the Blackfriars
Professor Jonathan Bate writes that Oxfordians cannot "provide any explanation for ... technical changes attendant on the King's Men's move to the Blackfriars theatre four years after their candidate's death .... Unlike the Globe, the Blackfriars was an indoor playhouse" and so required plays with frequent breaks in order to replace the candles it used for lighting. "The plays written after Shakespeare's company began using the Blackfriars in 1608, Cymbeline and The Winter's Tale for instance, have what most ... of the earlier plays do not have: a carefully planned five-act structure". If new Shakespearian plays were being written especially for presentation at the Blackfriars' theatre after 1608, they could not have been written by Edward de Vere.
Oxfordians argue that Oxford was well acquainted with the Blackfriars Theatre, having been a leaseholder of the venue, and note that the "assumption" that Shakespeare wrote plays for the Blackfriars is not universally accepted, citing Shakespearian scholars such as A. Nicoll who said that "all available evidence is either completely negative or else runs directly counter to such a supposition" and Harley Granville-Barker, who stated "Shakespeare did not write (except for Henry V) five-act plays at any stage of his career. The five-act structure was formalized in the First Folio, and is inauthentic".
Shakespeare's late collaborations
Further, attribution studies have shown that certain plays in the canon were written by two or three hands, which Oxfordians believe is explained by these plays being either drafted earlier than conventionally believed or revised and completed by others after Oxford's death. Shapiro calls this a 'nightmare' for Oxfordians, since it implies a 'jumble sale scenario' for Oxford's literary remains long after his death.
Identification of earlier works with Shakespeare plays
Some Oxfordians have identified titles or descriptions of lost works from Oxford's lifetime that suggest a thematic similarity to a particular Shakespearian play and asserted that they were earlier versions. For example, in 1732, the antiquarian Francis Peck published in Desiderata Curiosa a list of documents in his possession that he intended to print someday. They included "a pleasant conceit of Vere, earl of Oxford, discontented at the rising of a mean gentleman in the English court, circa 1580." Peck never published his archives, which are now lost. To Anderson, Peck's description suggests that this conceit is "arguably an early draft of Twelfth Night."
Contemporary references to Shakespeare as alive or dead
Oxfordian writers say some literary allusions imply that the playwright and poet died prior to 1609, when Shake-Speares Sonnets appeared with the epithet "our ever-living poet" in its dedication. They claim that the phrase "ever-living" rarely, if ever, referred to a living person, but instead was used to refer to the eternal soul of the deceased. Bacon, Derby, Neville, and William Shakespeare all lived well past the 1609 publication of the Sonnets.
However, Don Foster, in his study of Early Modern uses of the phrase "ever-living", argues that the phrase most frequently refers to God or other supernatural beings, suggesting that the dedication calls upon God to bless the living begetter (writer) of the sonnets. He states that the initials "W. H." were a misprint for "W. S." or "W. SH". Bate thinks it a misprint as well, but he thinks it "improbable" that the phrase refers to God and suggests that the "ever-living poet" might be "a great dead English poet who had written on the great theme of poetic immortality", such as Sir Philip Sidney or Edmund Spenser.
Joseph Sobran, in Alias Shakespeare, argued that in 1607 William Barksted, a minor poet and playwright, implies in his poem "Mirrha the Mother of Adonis" that Shakespeare was already deceased. Shakespeare scholars explain that Sobran has simply misread Barksted's poem, the last stanza of which compares Barksted's poem to Shakespeare's Venus and Adonis, and has also misconstrued its grammar, which makes clear that Barksted is referring to Shakespeare's "song" in the past tense, not to Shakespeare himself. This context is obvious when the rest of the stanza is included.
Against the Oxford theory are several references to Shakespeare, later than 1604, which imply that the author was then still alive. Scholars point to a poem written circa 1620 by William Basse, a student at Oxford, which states that the author Shakespeare died in 1616, the year of Shakespeare's death, not of Edward de Vere's.
Dates of composition
The Two Gentlemen of Verona
Tom Veal has noted that the early play The Two Gentlemen of Verona reveals no familiarity on the playwright's part with Italy other than "a few place names and the scarcely recondite fact that the inhabitants were Roman Catholics." For example, the play's Verona is situated on a tidal river and has a duke, and none of the characters have distinctly Italian names, as they do in the later plays. Therefore, if the play was written by Oxford, it must have been before he visited Italy in 1575. However, the play's principal source, the Spanish Diana Enamorada, would not be translated into French or English until 1578, meaning that someone basing a play on it that early could only have read it in the original Spanish, and there is no evidence that Oxford spoke this language. Furthermore, Veal argues, the only explanation for the verbal parallels with the English translation of 1582 would be that the translator saw the play performed and echoed it in his translation, which he describes as "not an impossible theory but far from a plausible one."
Hamlet
The composition date of Hamlet has been frequently disputed. Several surviving references indicate that a Hamlet-like play was well-known throughout the 1590s, well before the traditional period of composition (1599–1601). Most scholars refer to this lost early play as the Ur-Hamlet; the earliest reference is in 1589. A 1594 performance record of Hamlet appears in Philip Henslowe's diary, and Thomas Lodge wrote of it in 1596.
Oxfordian researchers believe that the play is an early version of Shakespeare's own play, and point to the fact that Shakespeare's version survives in three quite different early texts, Q1 (1603), Q2 (1604) and F (1623), suggesting the possibility that it was revised by the author over a period of many years.
Macbeth
Scholars contend that the composition date of Macbeth is one of the most overwhelming pieces of evidence against the Oxfordian position; the vast majority of critics believe the play was written in the aftermath of the Gunpowder Plot. This plot was brought to light on 5 November 1605, a year after Oxford died. In particular, scholars identify the porter's lines about "equivocation" and treason as an allusion to the trial of Henry Garnet in 1606. Oxfordians respond that the concept of "equivocation" was the subject of a 1583 tract by Queen Elizabeth's chief councillor (and Oxford's father-in-law) Lord Burghley, as well as of the 1584 Doctrine of Equivocation by the Spanish prelate Martín de Azpilcueta, which was disseminated across Europe and into England in the 1590s.
Coriolanus
Shakespearian scholar David Haley asserts that if Edward de Vere had written Coriolanus, he "must have foreseen the Midland Revolt grain riots [of 1607] reported in Coriolanus", possible topical allusions in the play that most Shakespearians accept.
The Tempest
The one play that can be dated within a fourteen-month period is The Tempest. This play has long been believed to have been inspired by the 1609 wreck at Bermuda, then feared by mariners as the Isle of the Devils, of the flagship of the Virginia Company, the Sea Venture, while leading the Third Supply to relieve Jamestown in the Colony of Virginia. The Sea Venture was captained by Christopher Newport, and carried the Admiral of the company's fleet, Sir George Somers (for whom the archipelago would subsequently be named The Somers Isles). The survivors spent nine months in Bermuda before most completed the journey to Jamestown on 23 May 1610 aboard two new ships built from scratch. One of the survivors was the newly appointed Governor, Sir Thomas Gates. Jamestown, then little more than a rudimentary fort, was found in such a poor condition, with the majority of the previous settlers dead or dying, that Gates and Somers decided to abandon the settlement and the continent, returning everyone to England. However, with the company believing all aboard the Sea Venture dead, a new governor, Baron De La Warr, had been sent with the Fourth Supply fleet, which arrived on 10 June 1610 as Jamestown was being abandoned.
De la Warr remained in Jamestown as Governor, while Gates returned to England (and Somers to Bermuda), arriving in September 1610. The news of the survival of the Sea Venture's passengers and crew caused a great sensation in England. Two accounts were published: Sylvester Jordain's A Discovery of the Barmvdas, Otherwise Called the Ile of Divels, in October 1610, and A True Declaration of the Estate of the Colonie in Virginia a month later. The True Reportory of the Wrack, and Redemption of Sir Thomas Gates Knight, an account by William Strachey dated 15 July 1610, returned to England with Gates in the form of a letter which was circulated privately until its eventual publication in 1625. Shakespeare had multiple contacts in the circle of people amongst whom the letter circulated, including Strachey himself. The Tempest shows clear evidence that he had read and relied on Jordain and especially Strachey. The play shares the premise, basic plot, and many details of the Sea Venture's wrecking and the adventures of the survivors, as well as specific verbal details and turns of phrase. A detailed comparative analysis shows Strachey's account to have been the primary source from which the play was drawn. This firmly dates the writing of the play to the months between Gates' return to England and 1 November 1611.
Oxfordians have dealt with this problem in several ways. Looney expelled the play from the canon, arguing that its style and the "dreary negativism" it promoted were inconsistent with Shakespeare's "essentially positivist" soul, and that it therefore could not have been written by Oxford. Later Oxfordians have generally abandoned this argument, which has made severing the connection of the play with the wreck of the Sea Venture a priority amongst Oxfordians. A variety of attacks have been directed at the links. These range from attempting to cast doubt on whether Strachey's account travelled back to England with Gates, whether Gates travelled back to England early enough, and whether the lowly Shakespeare would have had access to the lofty circles in which it was circulated, to understating the points of similarity between the Sea Venture wreck and the accounts of it, on the one hand, and the play on the other. Oxfordians have even claimed that the writers of the first-hand accounts of the real wreck based them on The Tempest, or, at least, on the same antiquated sources that Shakespeare, or rather Oxford, is imagined to have used exclusively, including Richard Eden's The Decades of the New Worlde Or West India (1555) and Desiderius Erasmus's Naufragium/The Shipwreck (1523). Alden Vaughan commented in 2008 that "[t]he argument that Shakespeare could have gotten every thematic thread, every detail of the storm, and every similarity of word and phrase from other sources stretches credulity to the limits."
Henry VIII
Oxfordians note that while the conventional dating for Henry VIII is 1610–13, the majority of 18th and 19th century scholars, including notables such as Samuel Johnson, Lewis Theobald, George Steevens, Edmond Malone, and James Halliwell-Phillipps, placed the composition of Henry VIII prior to 1604, as they believed that Elizabeth's execution of Mary, Queen of Scots (mother of the then-reigning King James I) made any vigorous defence of the Tudors politically inappropriate in the England of James I. Though it is described as a new play by two witnesses in 1613, Oxfordians argue that this refers only to the fact that it was new to the stage, having its first production in that year.
Oxfordian cryptology
Although searching Shakespeare's works for encrypted clues supposedly left by the true author is associated mainly with the Baconian theory, such arguments are often made by Oxfordians as well. Early Oxfordians found many references to Oxford's family name "Vere" in the plays and poems, in supposed puns on words such as "ever" (E. Vere). In The De Vere Code, English actor Jonathan Bond argues that Thomas Thorpe's 30-word dedication to the original publication of Shakespeare's Sonnets contains six simple encryptions which, in his view, conclusively establish de Vere as the author of the poems. He also writes that the alleged encryptions settle the question of the identity of "the Fair Youth" as Henry Wriothesley and contain striking references to the sonnets themselves and to de Vere's relationship with Sir Philip Sidney and Ben Jonson.
Similarly, a 2009 article in the Oxfordian journal Brief Chronicles noted that Francis Meres, in Palladis Tamia, compares 17 named English poets to 16 named classical poets. Writing that Meres was obsessed with numerology, the authors propose that the numbers should be symmetrical, and that careful readers are meant to infer that Meres knew two of the English poets (viz., Oxford and Shakespeare) to actually be one and the same.
Parallels with the plays
Literary scholars say that the idea that an author's work must reflect his or her life is a Modernist assumption not held by Elizabethan writers, and that biographical interpretations of literature are unreliable in attributing authorship. Further, such lists of similarities between incidents in the plays and the life of an aristocrat are flawed arguments because similar lists have been drawn up for many competing candidates, such as Francis Bacon and William Stanley, 6th Earl of Derby. Harold Love writes that "The very fact that their application has produced so many rival claimants demonstrates their unreliability," and Jonathan Bate writes that the Oxfordian biographical method "is in essence no different from the cryptogram, since Shakespeare's range of characters and plots, both familial and political, is so vast that it would be possible to find in the plays 'self-portraits' of ... anybody one cares to think of."
Despite this, Oxfordians list numerous incidents in Oxford's life that they say parallel those in many of the Shakespeare plays. Most notable among these, they say, are certain incidents found in both Oxford's biography and Hamlet, as well as in Henry IV, Part 1, which includes a well-known robbery scene with uncanny parallels to a real-life incident involving Oxford.
Hamlet
Most Oxfordians consider Hamlet the play most easily seen as portraying Oxford's life story, though mainstream scholars say that incidents from the lives of other contemporary figures, such as King James or the Earl of Essex, fit the play just as closely, if not more so.
Hamlet's father was murdered and his mother made an "o'er-hasty marriage" less than two months later. Oxfordians see a parallel with Oxford's life: Oxford's father died at the age of 46 on 3 August 1562, having made a will six days earlier, and his mother remarried within 15 months, although exactly when is unknown.
Another frequently cited parallel involves Hamlet's revelation in Act IV that he was earlier taken captive by pirates. On his return from Europe in 1576, Oxford encountered a cavalry division outside Paris led by a German duke, and his ship was hijacked by pirates who robbed him and left him stripped to his shirt, and who might have murdered him had not one of them recognised him. Anderson notes that "[n]either the encounter with Fortinbras' army nor Hamlet's brush with buccaneers appears in any of the play's sources – to the puzzlement of numerous literary critics."
Polonius
Such speculation often identifies the character of Polonius as a caricature of Lord Burghley, Oxford's guardian from the age of 12.
In the First Quarto the character was not named Polonius, but Corambis. Ogburn writes that Cor ambis can be interpreted as "two-hearted" (a view not independently supported by Latinists). He says the name is a swipe "at Burghley's motto, Cor unum, via una, or 'one heart, one way.'" Scholars suggest that it derives from the Latin phrase "crambe repetita" meaning "reheated cabbage", which was expanded in Elizabethan usage to "Crambe bis posita mors est" ("twice served cabbage is deadly"), which implies "a boring old man" who spouts trite rehashed ideas. Similar variants such as "Crambo" and "Corabme" appear in Latin-English dictionaries at the time.
Bed trick
In his Memoires (1658), Francis Osborne writes of "the last great Earle of Oxford, whose Lady was brought to his bed under the notion of his Mistris, and from such a virtuous deceit she (Oxford's youngest daughter) is said to proceed" (p. 79).
Such a bed trick has been a dramatic convention since antiquity and was used more than 40 times during the Early Modern theatre era, by every major playwright of the period except Ben Jonson. Thomas Middleton used it five times, and Shakespeare and James Shirley each used it four times. Shakespeare's use of it in All's Well That Ends Well and Measure for Measure followed his sources for the plays (stories by Boccaccio and Cinthio); nevertheless, Oxfordians say that de Vere was drawn to these stories because they "paralleled his own", based on Osborne's anecdote.
Earls of Oxford in the histories
Oxfordians claim that flattering treatment of Oxford's ancestors in Shakespeare's history plays is evidence of his authorship. Shakespeare omitted the character of the traitorous Robert de Vere, 3rd Earl of Oxford, from The Life and Death of King John, and the character of the 12th Earl of Oxford is given a much more prominent role in Henry V than his limited involvement in the actual history of the times would allow. The 11th Earl is given an even more prominent role in the non-Shakespearian play The Famous Victories of Henry the Fifth. Some Oxfordians argue that this was another play written by Oxford, based on the exaggerated role it gave to the 11th Earl of Oxford.
J. Thomas Looney found that John de Vere, 13th Earl of Oxford, is "hardly mentioned except to be praised" in Henry VI, Part Three; the play ahistorically depicts him participating in the Battle of Tewkesbury and being captured. Oxfordians, such as Dorothy and Charlton Ogburn, believe Shakespeare created such a role for the 13th Earl because it was the easiest way Edward de Vere could have "advertised his loyalty to the Tudor Queen" and reminded her of "the historic part borne by the Earls of Oxford in defeating the usurpers and restoring the Lancastrians to power". Looney also notes that in Richard III, when the future Henry VII appears, the same Earl of Oxford is "by his side; and it is Oxford who, as premier nobleman, replies first to the king's address to his followers".
Non-Oxfordian writers do not see any evidence of partiality for the de Vere family in the plays. Richard de Vere, 11th Earl of Oxford, who plays a prominent role in the anonymous The Famous Victories of Henry V, does not appear in Shakespeare's Henry V, nor is he even mentioned. In Richard III, Oxford's reply to the king noted by Looney is a mere two lines, the only lines he speaks in the play. He has a much more prominent role in the non-Shakespearian play The True Tragedy of Richard III. On these grounds the scholar Benjamin Griffin argues that the non-Shakespearian plays, the Famous Victories and the True Tragedy, are the ones connected to Oxford, possibly written for Oxford's Men. The Oxfordian Charlton Ogburn Jr. argues that the role of the Earls of Oxford was played down in Henry V and Richard III to maintain Oxford's nominal anonymity, because "It would not do to have a performance of one of his plays at Court greeted with ill-suppressed knowing chuckles."
Oxford's finances
In 1577 the Company of Cathay was formed to support Martin Frobisher's hunt for the Northwest Passage, although Frobisher and his investors quickly became distracted by reports of gold at Hall’s Island. With thoughts of an impending Canadian gold-rush and trusting in the financial advice of Michael Lok, the treasurer of the company, de Vere signed a bond for £3,000 in order to invest £1,000 and to assume £2,000 worth – about half – of Lok's personal investment in the enterprise. Oxfordians say this is similar to Antonio in The Merchant of Venice, who was indebted to Shylock for 3,000 ducats against the successful return of his vessels.
Oxfordians also note that when de Vere travelled through Venice, he borrowed 500 crowns from a Baptista Nigrone. In Padua, he borrowed from a man named Pasquino Spinola. In The Taming of the Shrew, Kate's father is described as a man "rich in crowns." He, too, is from Padua, and his name is Baptista Minola, which Oxfordians take to be a conflation of Baptista Nigrone and Pasquino Spinola.
When the character of Antipholus of Ephesus in The Comedy of Errors tells his servant to go out and buy some rope, the servant (Dromio) replies, "I buy a thousand pounds a year! I buy a rope!" (Act 4, scene 1). The meaning of Dromio’s line has not been satisfactorily explained by critics, but Oxfordians say the line is somehow connected to the fact that de Vere was given a £1,000 annuity by the Queen, later continued by King James.
Marriage and affairs
Oxfordians see Oxford's marriage to Anne Cecil, Lord Burghley's daughter, paralleled in such plays as Hamlet, Othello, Cymbeline, The Merry Wives of Windsor, All's Well That Ends Well, Measure for Measure, Much Ado About Nothing, and The Winter's Tale.
Oxford's illicit congress with Anne Vavasour resulted in an intermittent series of street battles between the Knyvet clan, led by Anne's uncle, Sir Thomas Knyvet, and Oxford’s men. As in Romeo and Juliet, this imbroglio produced three deaths and several other injuries. The feud was finally put to an end only by the intervention of the Queen.
Oxford's criminal associations
In May 1573, in a letter to Lord Burghley, two of Oxford's former employees accused three of Oxford's friends of attacking them on "the highway from Gravesend to Rochester." In Shakespeare's Henry IV, Part 1, Falstaff and three roguish friends of Prince Hal also waylay unwary travellers at Gad's Hill, which is on the highway from Gravesend to Rochester. Scott McCrea says that there is little similarity between the two events, since the crime described in the letter is unlikely to have occurred near Gad's Hill and was not a robbery, but rather an attempted shooting. Mainstream writers also say that this episode derives from an earlier anonymous play, The Famous Victories of Henry V, which was Shakespeare's source. Some Oxfordians argue that The Famous Victories was written by Oxford, based on the exaggerated role it gave to the 11th Earl of Oxford.
Parallels with the sonnets and poems
In 1609, a volume of 154 linked poems was published under the title SHAKE-SPEARES SONNETS. Oxfordians believe the title (Shake-Speares Sonnets) suggests a finality indicating that it was a completed body of work with no further sonnets expected, and consider the differences of opinion among Shakespearian scholars as to whether the Sonnets are fictional or autobiographical to be a serious problem facing orthodox scholars. Joseph Sobran questions why Shakespeare (who lived until 1616) failed to publish a corrected and authorised edition if they are fiction, as well as why they fail to match Shakespeare's life story if they are autobiographical. According to Sobran and other researchers, the themes and personal circumstances expounded by the author of the Sonnets are remarkably similar to Oxford's biography.
The Fair Youth, the Dark Lady, and the Rival Poet
The 154-sonnet sequence appears to narrate the author's relationships with three characters: the Fair Youth, the Dark Lady or Mistress, and the Rival Poet. Beginning with Looney, most Oxfordians (exceptions are Percy Allen and Louis Bénézet) believe that the "Fair Youth" addressed in the early sonnets is Henry Wriothesley, 3rd Earl of Southampton, Oxford's peer and prospective son-in-law. The Dark Lady is believed by some Oxfordians to be Anne Vavasour, Oxford's mistress who bore him a son out of wedlock. A case was made by the Oxfordian Peter R. Moore that the Rival Poet was Robert Devereux, Earl of Essex.
Sobran suggests that the so-called procreation sonnets were part of a campaign by Burghley to persuade Southampton to marry his granddaughter, Oxford's daughter Elizabeth de Vere, and says that it was more likely that Oxford would have participated in such a campaign than that Shakespeare would know the parties involved or presume to give advice to the nobility.
Oxfordians also assert that the tone of the poems is that of a nobleman addressing an equal rather than that of a poet addressing his patron. According to them, Sonnet 91 (which compares the Fair Youth's love to such treasures as high birth, wealth, and horses) implies that the author is in a position to make such comparisons, and the 'high birth' he refers to is his own.
Age and lameness
Oxford was born in 1550 and was between 40 and 53 years old when he would presumably have written the sonnets. Shakespeare, born in 1564, would have been between 26 and 39 over the same years; even though the average life expectancy of Elizabethans was short, being between 26 and 39 was not considered old. In spite of this, age and growing older are recurring themes in the Sonnets, for example in Sonnets 138 and 37. In his later years, Oxford described himself as "lame". On several occasions, the author of the sonnets also describes himself as lame, as in Sonnets 37 and 89.
Public disgrace
Sobran also believes "scholars have largely ignored one of the chief themes of the Sonnets: the poet's sense of disgrace ... [T]here can be no doubt that the poet is referring to something real that he expects his friends to know about; in fact, he makes clear that a wide public knows about it ... Once again the poet's situation matches Oxford's ... He has been a topic of scandal on several occasions. And his contemporaries saw the course of his life as one of decline from great wealth, honor, and promise to disgrace and ruin. This perception was underlined by enemies who accused him of every imaginable offense and perversion, charges he was apparently unable to rebut." Examples include Sonnets 29 and 112.
As early as 1576, Edward de Vere was writing about this subject in his poem Loss of Good Name, which Steven W. May described as "a defiant lyric without precedent in English Renaissance verse."
Lost fame
The poems Venus and Adonis and Lucrece, first published in 1593 and 1594 under the name "William Shakespeare", proved highly popular for several decades – with Venus and Adonis published six more times before 1616, while Lucrece required four additional printings during this same period. By 1598, they were so famous that London poet and sonneteer Richard Barnefield wrote:
Shakespeare.....
Whose Venus and whose Lucrece (sweet and chaste)
Thy name in fame's immortal Book have plac't
Live ever you, at least in Fame live ever:
Well may the Body die, but Fame dies never.
Despite such publicity, Sobran observed, "[t]he author of the Sonnets expects and hopes to be forgotten. While he is confident that his poetry will outlast marble and monument, it will immortalize his young friend, not himself. He says that his style is so distinctive and unchanging that 'every word doth almost tell my name,' implying that his name is otherwise concealed – at a time when he is publishing long poems under the name William Shakespeare. This seems to mean that he is not writing these Sonnets under that (hidden) name." Oxfordians have interpreted the phrase "every word" as a pun on the word "every", standing for "e vere" – thus telling his name. Mainstream writers respond that several sonnets literally do tell his name, containing numerous puns on the name Will[iam]; in sonnet 136 the poet directly says "thou lov'st me for my name is Will."
Based on Sonnets 81, 72, and others, Oxfordians assert that if the author expected his "name" to be "forgotten" and "buried", it would not have been the name that permanently adorned the published works themselves.
In fiction
Leslie Howard's 1941 anti-Nazi film "Pimpernel" Smith features dialogue by the protagonist endorsing the Oxfordian theory.
In the afterword of the 2000 young adult novel A Question of Will, author Lynne Kositsky addresses the debate over who really wrote Shakespeare's plays, supporting the Oxfordian theory.
Oxfordian theory, and the Shakespeare authorship question in general, is the basis of Amy Freed's 2001 play The Beard of Avon.
Oxfordian theory is central to the plot of Sarah Smith's 2003 novel Chasing Shakespeares.
The 2005 young adult novel Shakespeare's Secret by Elise Broach is centred on the Oxfordian theory.
The Oxfordian theory, among others, is discussed in Jennifer Lee Carrell's 2007 thriller Interred With Their Bones.
The 2011 film Anonymous, directed by Roland Emmerich, portrays the Prince Tudor theory.
The theory is mocked in a five-minute scene in the 2014 film The Gambler.
See also
List of Oxfordian theory supporters
Baconian theory
Derbyite theory of Shakespeare authorship
Marlovian theory of Shakespeare authorship
Nevillean theory of Shakespeare authorship
Notes
Footnotes
The UK and US editions of the book differ significantly in pagination. The citations to the book are to the UK edition, and page numbers reflect that edition.
Citations
References
Bibliography
A'Dair, Mike. Four Essays on the Shakespeare Authorship Question. Verisimilitude Press (6 September 2011)
Austin, Al, and Judy Woodruff. The Shakespeare Mystery. 1989. Frontline documentary film about the Oxford case.
Beauclerk, Charles, Shakespeare's Lost Kingdom: The True History of Shakespeare and Elizabeth. Grove Press (13 April 2010). (Supports Prince Tudor theory.)
Brazil, Robert Sean, Edward de Vere and the Shakespeare Printers. Seattle, WA: Cortical Output, 2010.
Edmondson, Paul, and Wells, Stanley, eds. Shakespeare Beyond Doubt: Evidence, Argument, Controversy. Cambridge University Press (27 May 2013).
Hope, Warren, and Kim Holston. The Shakespeare Controversy: An Analysis of the Authorship Theories (2nd Edition) (Jefferson, NC and London: McFarland and Co., 2009 [first pub. 1992]).
Kreiler, Kurt. Anonymous Shake-Speare. The Man Behind. Munich: Dölling und Galitz, 2011.
Magri, Noemi. Such Fruits Out of Italy: The Italian Renaissance in Shakespeare's Plays and Poems. Buchholz, Germany, Laugwitz Verlag (2014).
Malim, Richard, ed. Great Oxford: Essays on the Life and Work of Edward de Vere, 17th Earl of Oxford, 1550–1604. London: Parapress, 2004.
Rendall, Gerald H. Shakespeare Sonnets and Edward de Vere. London: John Murray, Albemarle Street, 1930.
Roe, Richard Paul. The Shakespeare Guide to Italy: Retracing the Bard's Unknown Travels. New York, HarperCollins Publishers, 2011.
Whalen, Richard. Shakespeare: Who Was He? The Oxford Challenge to the Bard of Avon. Westport, Ct.: Praeger, 1994.
Whittemore, Hank. The Monument: "Shake-Speares Sonnets" by Edward de Vere, 17th Earl of Oxford. Meadow Geese Press (12 April 2005). (Supports Prince Tudor theory.)
Whittemore, Hank. Shakespeare's Son and His Sonnets. Martin and Lawrence Press (1 December 2010). (Supports Prince Tudor theory.)
External links
Sites promoting the Oxfordian theory
The Shakespeare Oxford Fellowship
The De Vere Society of Great Britain
The Shakespeare Authorship Sourcebook
Sites refuting the Oxfordian theory
The Shakespeare Authorship Page
Arguments against Oxford's authorship by Irvin Leigh Matus
Oxfraud: The Man Who Wasn't Hamlet
22679 | https://en.wikipedia.org/wiki/Office%20of%20Strategic%20Services | Office of Strategic Services | The Office of Strategic Services (OSS) was the intelligence agency of the United States during World War II. The OSS was formed as an agency of the Joint Chiefs of Staff (JCS) to coordinate espionage activities behind enemy lines for all branches of the United States Armed Forces. Other OSS functions included the use of propaganda, subversion, and post-war planning.
The OSS was dissolved a month after the end of the war. Its intelligence functions were soon resumed and carried on by its successors, the Department of State's Bureau of Intelligence and Research (INR) and the independent Central Intelligence Agency (CIA).
On December 14, 2016, the organization was collectively honored with a Congressional Gold Medal.
Origin
Prior to the formation of the OSS, the various departments of the executive branch, including the State, Treasury, Navy, and War Departments, conducted American intelligence activities on an ad hoc basis, with no overall direction, coordination, or control. The US Army and US Navy had separate code-breaking departments: the Signal Intelligence Service and OP-20-G. (A previous code-breaking operation of the State Department, MI-8, run by Herbert Yardley, had been shut down in 1929 by Secretary of State Henry Stimson, who deemed it an inappropriate function for the diplomatic arm because "gentlemen don't read each other's mail.") The FBI was responsible for domestic security and anti-espionage operations.
President Franklin D. Roosevelt was concerned about American intelligence deficiencies. On the suggestion of William Stephenson, the senior British intelligence officer in the western hemisphere, Roosevelt requested that William J. Donovan draft a plan for an intelligence service based on the British Secret Intelligence Service (MI6) and Special Operations Executive (SOE). After submitting his work, "Memorandum of Establishment of Service of Strategic Information", Colonel Donovan was appointed "coordinator of information" on July 11, 1941, heading the new organization known as the office of the Coordinator of Information (COI).
Thereafter the organization was developed with British assistance; Donovan had responsibilities but no actual powers and the existing US agencies were skeptical if not hostile. Until some months after Pearl Harbor, the bulk of OSS intelligence came from the UK. British Security Co-ordination (BSC) trained the first OSS agents in Canada, until training stations were set up in the US with guidance from BSC instructors, who also provided information on how the SOE was arranged and managed. The British immediately made available their short-wave broadcasting capabilities to Europe, Africa, and the Far East and provided equipment for agents until American production was established.
The Office of Strategic Services was established by a Presidential military order issued by President Roosevelt on June 13, 1942, to collect and analyze strategic information required by the Joint Chiefs of Staff and to conduct special operations not assigned to other agencies. During the war, the OSS supplied policymakers with facts and estimates, but the OSS never had jurisdiction over all foreign intelligence activities. The FBI was left responsible for intelligence work in Latin America, and the Army and Navy continued to develop and rely on their own sources of intelligence.
Activities
OSS proved especially useful in providing a worldwide overview of the German war effort, its strengths and weaknesses. In direct operations it was successful in supporting Operation Torch in French North Africa in 1942, where it identified pro-Allied potential supporters and located landing sites. OSS operations in neutral countries, especially Stockholm, Sweden, provided in-depth information on German advanced technology. The Madrid station set up agent networks in France that supported the Allied invasion of southern France in 1944. Most famous were the operations in Switzerland run by Allen Dulles that provided extensive information on German strength, air defenses, submarine production, and the V-1 and V-2 weapons. It revealed some of the secret German efforts in chemical and biological warfare. Switzerland's station also supported resistance fighters in France, Austria and Italy, and helped with the surrender of German forces in Italy in 1945.
For the duration of World War II, the Office of Strategic Services was conducting multiple activities and missions, including collecting intelligence by spying, performing acts of sabotage, waging propaganda war, organizing and coordinating anti-Nazi resistance groups in Europe, and providing military training for anti-Japanese guerrilla movements in Asia, among other things. At the height of its influence during World War II, the OSS employed almost 24,000 people.
From 1943 to 1945, the OSS played a major role in training Kuomintang troops in China and Burma, and recruited Kachin and other indigenous irregular forces for sabotage as well as guides for Allied forces in Burma fighting the Japanese Army. Among other activities, the OSS helped arm, train, and supply resistance movements in areas occupied by the Axis powers during World War II, including Mao Zedong's Red Army in China (through what became known as the Dixie Mission) and the Viet Minh in French Indochina. OSS officer Archimedes Patti played a central role in OSS operations in French Indochina and met frequently with Ho Chi Minh in 1945.
One of the greatest accomplishments of the OSS during World War II was its penetration of Nazi Germany by OSS operatives. The OSS was responsible for training German and Austrian individuals for missions inside Germany. Some of these agents included exiled communists and Socialist party members, labor activists, anti-Nazi prisoners-of-war, and German and Jewish refugees. The OSS also recruited and ran one of the war's most important spies, the German diplomat Fritz Kolbe.
From 1943 the OSS was in contact with the Austrian resistance group around Kaplan Heinrich Maier. As a result, plans and production facilities for V-2 rockets, Tiger tanks and aircraft (Messerschmitt Bf 109, Messerschmitt Me 163 Komet, etc.) were passed on to Allied general staffs so that Allied bombers could carry out accurate air strikes. Through its contacts with the Semperit factory near Auschwitz, the Maier group also provided very early information about the mass murder of Jews. The group was gradually dismantled by the German authorities because of a double agent who worked for both the OSS and the Gestapo; the ensuing investigation uncovered a transfer of money from the Americans to Vienna via Istanbul and Budapest, and most of the members were executed after a People's Court hearing.
In 1943, the Office of Strategic Services set up operations in Istanbul. Turkey, as a neutral country during the Second World War, was a place where both the Axis and Allied powers maintained spy networks. The railroads connecting central Asia with Europe, as well as Turkey's close proximity to the Balkan states, placed it at a crossroads of intelligence gathering. The goal of the OSS Istanbul operation, called Project Net-1, was to infiltrate the former Ottoman and Austro-Hungarian Empires and support subversive action there.
The head of operations at OSS Istanbul was a banker from Chicago named Lanning "Packy" Macfarland, who maintained a cover story as a banker for the American lend-lease program. Macfarland hired Alfred Schwarz, a Czechoslovakian engineer and businessman who came to be known as "Dogwood" and ended up establishing the Dogwood information chain. Dogwood in turn hired a personal assistant named Walter Arndt and established himself as an employee of the Istanbul Western Electrik Kompani. Through Schwarz and Arndt the OSS was able to infiltrate anti-fascist groups in Austria, Hungary, and Germany. Schwarz was able to convince Romanian, Bulgarian, Hungarian, and Swiss diplomatic couriers to smuggle American intelligence information into these territories and establish contact with elements antagonistic to the Nazis and their collaborators. Couriers and agents memorized information and produced analytical reports; when they were not able to memorize effectively, they recorded information on microfilm and hid it in their shoes or in hollowed-out pencils. Through this process information about the Nazi regime made its way to Macfarland and the OSS in Istanbul and eventually to Washington.
While the OSS "Dogwood-chain" produced a large volume of information, its reliability was increasingly questioned by British intelligence. By May 1944, through collaboration between the OSS, British intelligence, Cairo, and Washington, the entire Dogwood-chain was found to be unreliable and dangerous: the phony information planted on the OSS had been intended to misdirect Allied resources. Schwarz's Dogwood-chain, which was the largest American intelligence-gathering network in occupied territory, was shortly thereafter shut down.
The OSS purchased Soviet code and cipher material (or Finnish information on them) from émigré Finnish army officers in late 1944. Secretary of State Edward Stettinius, Jr., protested that this violated an agreement President Roosevelt made with the Soviet Union not to interfere with Soviet cipher traffic from the United States. General Donovan might have copied the papers before returning them the following January, but there is no record of Arlington Hall receiving them, and CIA and NSA archives have no surviving copies. This codebook was in fact used as part of the Venona decryption effort, which helped uncover large-scale Soviet espionage in North America.
RYPE was the codename of the airborne unit that was dropped in the Norwegian mountains of Snåsa on March 24, 1945, to carry out sabotage behind enemy lines. From its base at the Gjefsjøen mountain farm, the group conducted successful railroad sabotage with the intention of preventing the withdrawal of German forces from northern Norway. Operasjon Rype, led by William Colby, was the only U.S. operation on German-occupied Norwegian soil during World War II. The group consisted mainly of Norwegian Americans recruited from the 99th Infantry Battalion.
The OSS sent four two-man teams (Alsace, Poissy, S&S and Student) under Captain Stephen Vinciguerra (codename Algonquin) with Operation Varsity in March 1945 to infiltrate and report from behind enemy lines, but none succeeded. Team S&S had two agents in Wehrmacht uniforms and a captured Kübelwagen and was to report by radio, but the Kübelwagen was put out of action while still in the glider: three tires and the long-range radio were shot up (German gunners had been told to attack the gliders, not the tow planes).
Weapons and gadgets
The OSS espionage and sabotage operations produced a steady demand for highly specialized equipment. General Donovan invited experts, organized workshops, and funded labs that later formed the core of the Research & Development Branch. Boston chemist Stanley P. Lovell became its first head, and Donovan humorously called him his "Professor Moriarty". Throughout the war years, the OSS Research & Development successfully adapted Allied weapons and espionage equipment, and produced its own line of novel spy tools and gadgets, including silenced pistols, lightweight sub-machine guns, "Beano" grenades that exploded upon impact, explosives disguised as lumps of coal ("Black Joe") or bags of Chinese flour ("Aunt Jemima"), acetone time delay fuses for limpet mines, compasses hidden in uniform buttons, playing cards that concealed maps, a 16mm Kodak camera in the shape of a matchbox, tasteless poison tablets ("K" and "L" pills), and cigarettes laced with tetrahydrocannabinol acetate (an extract of Indian hemp) to induce uncontrollable chattiness.
The OSS also developed innovative communication equipment such as wiretap gadgets, electronic beacons for locating agents, and the "Joan-Eleanor" portable radio system that made it possible for operatives on the ground to establish secure contact with a plane that was preparing to land or drop cargo. The OSS Research & Development also printed fake German and Japanese-issued identification cards, and various passes, ration cards, and counterfeit money.
On August 28, 1943, Stanley Lovell was asked to make a presentation in front of a hostile Joint Chiefs of Staff, who were skeptical of OSS plans beyond collecting military intelligence and were ready to split the OSS between the Army and the Navy. While explaining the purpose and mission of his department and introducing various gadgets and tools, he reportedly casually dropped into a waste basket a Hedy, a panic-inducing explosive device in the shape of a firecracker, which shortly produced a loud shrieking sound followed by a deafening boom. The presentation was interrupted and did not resume since everyone in the room fled. In reality, the Hedy, jokingly named after Hollywood movie star Hedy Lamarr for her ability to distract men, later saved the lives of some trapped OSS operatives.
Not all projects worked. Some ideas were odd, such as a failed attempt to use insects to spread anthrax in Spain. Stanley Lovell was later quoted saying, "It was my policy to consider any method whatever that might aid the war, however unorthodox or untried".
In 1939, a young physician named Christian J. Lambertsen developed an oxygen rebreather set (the Lambertsen Amphibious Respiratory Unit) and, after it had been rejected by the U.S. Navy, demonstrated it to the OSS in a pool at the Shoreham Hotel in Washington, D.C., in 1942. The OSS not only bought into the concept but also hired Lambertsen to lead the program and build up the dive element of the organization. His responsibilities included training and developing methods of combining self-contained diving and swimmer delivery, including the Lambertsen Amphibious Respiratory Unit, for the OSS "Operational Swimmer Group". Growing involvement of the OSS with coastal infiltration and water-based sabotage eventually led to the creation of the OSS Maritime Unit.
Facilities
At Camp X, near Whitby, Ontario, an "assassination and elimination" training program was operated by the British Special Operations Executive, which assigned exceptional masters in the art of knife-wielding combat, such as William E. Fairbairn and Eric A. Sykes, to instruct trainees. Many members of the Office of Strategic Services were also trained there. It was dubbed "the school of mayhem and murder" by George Hunter White, who trained at the facility in the 1950s.
From these incipient beginnings, the OSS began to take charge of its own destiny and opened camps in the United States and, finally, abroad. Prince William Forest Park (then known as the Chopawamsic Recreational Demonstration Area) was the site of an OSS training camp that operated from 1942 to 1945. Area "C" was used extensively for communications training, whereas Area "A" was used for training some of the OGs (Operational Groups). Catoctin Mountain Park, now the location of Camp David, was the site of OSS training Area "B", where the first Special Operations (SO) personnel were trained. Special Operations was modeled after Great Britain's Special Operations Executive, and its training included parachute, sabotage, self-defense, weapons, and leadership instruction to support guerrilla or partisan resistance. Considered most mysterious of all was the "cloak and dagger" Secret Intelligence, or SI, branch, which employed "country estates as schools for introducing recruits into the murky world of espionage. Thus, it established Training Areas E and RTU-11 ("the Farm") in spacious manor houses with surrounding horse farms." Morale Operations training included psychological warfare and propaganda. The Congressional Country Club (Area F) in Bethesda, Maryland, was the primary OSS training facility. The facilities of the Catalina Island Marine Institute at Toyon Bay on Santa Catalina Island, California, include (in part) a former OSS survival training camp. The National Park Service commissioned a study of OSS National Park training facilities by Professor John Chambers of Rutgers University.
The main OSS training camps abroad were located initially in Great Britain, French Algeria, and Egypt; later as the Allies advanced, a school was established in southern Italy. In the Far East, OSS training facilities were established in India, Ceylon, and then China. The London branch of the OSS, its first overseas facility, was at 70 Grosvenor Street, W1. In addition to training local agents, the overseas OSS schools also provided advanced training and field exercises for graduates of the training camps in the United States and for Americans who enlisted in the OSS in the war zones. The most famous of the latter was Virginia Hall in France.
The OSS's Mediterranean training center in Cairo, Egypt, known to many as the Spy School, was a lavish palace belonging to King Farouk's brother-in-law, called Ras el Kanayas. It was modeled after the SOE's training facility STS 102 in Haifa, Palestine. Americans whose heritage stemmed from the Kingdom of Italy, the Kingdom of Yugoslavia, and the Kingdom of Greece were trained at the "Spy School" and were also sent for parachute, weapons and commando training, and for Morse code and encryption lessons, at STS 102. After completing their spy training, these agents were sent back on missions to the Balkans and Italy, where their accents would not pose a problem for their assimilation.
Personnel
The names of all 13,000 OSS personnel and documents of their OSS service, previously a closely guarded secret, were released by the US National Archives on August 14, 2008. Among the 24,000 names were those of Sterling Hayden, Carl C. Cable, Julia Child, Ralph Bunche, Arthur Goldberg, Saul K. Padover, Arthur Schlesinger, Jr., Bruce Sundlun, William Colby, Rene Joyeuse MD and John Ford. The 750,000 pages in the 35,000 personnel files include applications of people who were not recruited or hired, as well as the service records of those who served.
OSS soldiers were primarily inducted from the United States Armed Forces. Other members were foreign nationals, including displaced individuals from former czarist Russia such as Prince Serge Obolensky.
Donovan sought independent thinkers, and in order to bring together the many intelligent, quick-witted individuals who could think out of the box, he chose them from all walks of life and backgrounds, without distinction of culture or religion. Donovan was quoted as saying, "I'd rather have a young lieutenant with enough guts to disobey a direct order than a colonel too regimented to think for himself." In a matter of a few short months, he formed an organization that equalled and then rivalled Great Britain's Secret Intelligence Service and its Special Operations Executive. Donovan, inspired by Britain's SOE, assembled an outstanding group of clinical psychologists to carry out evaluations of potential OSS candidates at a variety of sites, chief among them Station S in northern Virginia, near where Dulles International Airport now stands. Recent research based on surviving records from the OSS Station S program describes how those characteristics (independent thought, effective intelligence, interpersonal skills) were identified among OSS candidates.
One such agent was the Ivy League polyglot and Jewish-American baseball catcher Moe Berg, who played 15 seasons in the major leagues. As a Secret Intelligence agent, he was dispatched to seek information on the German physicist Werner Heisenberg and his knowledge of the atomic bomb. One of the most highly decorated and flamboyant OSS soldiers was US Marine Colonel Peter Ortiz. Enlisting early in the war as a French Foreign Legionnaire, he went on to join the OSS and became the most highly decorated US Marine in the OSS during World War II. Julia Child, who later authored cookbooks, worked directly under Donovan.
Rene Joyeuse, M.D., M.S., FACS, was a Swiss, French and American soldier, physician and researcher who distinguished himself as an agent of Allied intelligence in German-occupied France during World War II. He received the US Army Distinguished Service Cross for his actions with the OSS; after the war he worked as a physician and researcher and was a co-founder of the American Trauma Society.
"Jumping Joe" Savoldi (code name Sampson) was recruited by the OSS in 1942 because of his hand-to-hand combat and language skills as well as his deep knowledge of the Italian geography and Benito Mussolini's compound. He was assigned to the Special Operations branch and took part in missions in North Africa, Italy, and France during 1943–1945.
One of the forefathers of today's commandos was Navy Lieutenant Jack Taylor. He was sequestered by the OSS early in the war and had a long career behind enemy lines.
Taro and Mitsu Yashima, both Japanese political dissidents who were imprisoned in Japan for protesting its militarist regime, worked for the OSS in psychological warfare against the Japanese Empire.
Nisei linguists
In late 1943, a representative from the OSS visited the 442nd Infantry Regiment looking to recruit volunteers willing to undertake an "extremely hazardous assignment." All those selected were Nisei. The recruits were assigned to OSS Detachments 101 and 202 in the China-Burma-India Theater. "Once deployed, they were to interrogate prisoners, translate documents, monitor radio communications, and conduct covert operations... Detachment 101 and 202's clandestine operations were extremely successful."
Dissolution into other agencies
On September 20, 1945, President Truman signed Executive Order 9621, terminating the OSS. The State Department took over the Research and Analysis Branch, which became the Bureau of Intelligence and Research. The War Department took over the Secret Intelligence (SI) and Counter-Espionage (X-2) Branches, which were then housed in the new Strategic Services Unit (SSU). Brigadier General John Magruder (formerly Donovan's Deputy Director for Intelligence in the OSS) became the new SSU director. He oversaw the liquidation of the OSS and managed the institutional preservation of its clandestine intelligence capability.
In January 1946, President Truman created the Central Intelligence Group (CIG), which was the direct precursor to the CIA. SSU assets, which now constituted a streamlined "nucleus" of clandestine intelligence, were transferred to the CIG in mid-1946 and reconstituted as the Office of Special Operations (OSO). The National Security Act of 1947 established the Central Intelligence Agency, which then took up some OSS functions. The direct descendant of the paramilitary component of the OSS is the CIA Special Activities Division.
Today, the joint-branch United States Special Operations Command, founded in 1987, uses the same spearhead design on its insignia in homage to its indirect lineage. The Defense Intelligence Agency currently manages the OSS's mandate to coordinate human espionage activities across the United States Armed Forces.
Branches
Censorship and Documents
Field Experimental Unit
Foreign Nationalities
Maritime Unit
Morale Operations Branch
Operational Group Command
Research & Analysis
Secret Intelligence
Security
Special Operations
Special Projects
X-2 (counterespionage)
Detachments
OSS Deer Team: Vietnam
OSS Detachment 101: Burma
OSS Detachment 202: China
OSS Detachment 303: New Delhi, India
OSS Detachment 404: attached to British South East Asia Command in Kandy, Ceylon
OSS Detachment 505: Calcutta, India
US Army units attached to the OSS
2671st Special Reconnaissance Battalion
2677th Office of Strategic Services Regiment
In popular culture
Comics
The OSS was a featured organization in DC Comics, introduced in G.I. Combat #192 (July 1976). Led by the mysterious Control, they operated as an espionage unit, initially in Nazi-occupied France. The organization would later become Argent.
The alter ego of the DC Comics superheroine Wonder Woman, Diana Prince, works for Major Steve Trevor at the OSS. In this position, she found herself privy to intelligence on Axis operations in the United States, and many times foiled agents of Nazi Germany, Imperial Japan, and Fascist Italy in their attempts to defeat the Allies and achieve world domination.
Films
The Paramount film O.S.S. (1946), starring Alan Ladd and Geraldine Fitzgerald, showed agents training and on a dangerous mission. Commander John Shaheen acted as technical advisor.
The film 13 Rue Madeleine (1946) stars James Cagney as an OSS agent who must find a mole in French partisan operations. Peter Ortiz acted as technical advisor.
The film Cloak and Dagger (1946) stars Gary Cooper as a scientist recruited by the OSS to exfiltrate a German scientist defecting to the Allies with the help of a woman guerrilla and her partisans. E. Michael Burke acted as technical advisor.
In the film Charade (1963), Carson Dyle (Walter Matthau) explains the CIA and OSS to Reggie Lampert (Audrey Hepburn).
In The Good Shepherd (2006), Matt Damon plays Edward Wilson, a Skull and Bones recruit who joins the OSS to help with a mission in London. He quickly gains rank as the head of the newly formed CIA's counterintelligence service.
The biographical film Flash of Genius (2008) is about famed American inventor and OSS veteran Robert Kearns.
In the film Indiana Jones and the Kingdom of the Crystal Skull (2008), it is indicated that Indiana Jones worked for the OSS and attained the rank of Colonel.
In the film Inglourious Basterds (2009), directed by Quentin Tarantino, the titular "basterds" are members of an OSS commando squad in occupied France, although no such OSS unit ever actually existed.
The film Julie & Julia (2009) includes flashback scenes depicting Julia Child's wartime service with the OSS.
The Real Inglorious Bastards (2012), a short film documentary, directed by Min Sook Lee, is about the OSS officers such as Frederick Mayer (spy), Hans Wijnberg, and Franz Weber, who volunteered to operate behind enemy lines, e.g., during "Operation Greenup", to defeat the German armed forces.
Camp X: Secret Agent School (2014), a YAP Films documentary for History Channel (Canada), portrays the first spy school in North America, OSS agents, their training at Camp X, and their missions behind enemy lines.
World War II Spy School (2014), a YAP Films documentary for the Smithsonian Channel, portrays Camp X and the other training sites overseas, as well as OSS agents and their missions.
Games
'Tabletop roleplaying games'
The OSS is also mentioned in Pelgrane Press's The Fall of DELTA GREEN. Player characters can be ex-OSS agents serving in other agencies such as the CIA, which can be beneficial: their past OSS careers lend them authenticity, experience, and authority.
Video games
In Call of Duty: World at War (2008), Dr. Peter McCain is an OSS spy.
In Indiana Jones and the Infernal Machine (1999), the main female character, Sophia Hapgood, is an OSS (later CIA) agent.
Most games in the Medal of Honor video game franchise feature a fictional OSS agent as the main character.
In the 2012 game Sniper Elite V2 and its prequels Sniper Elite III and Sniper Elite 4, the protagonist is an SOE turned OSS agent sniper.
In the Wolfenstein series video game series, the main character is a member of a fictional organisation called the OSA (Office of Secret Actions), which is inspired by the OSS.
In Tom Clancy's The Division 2, one of the game's several hidden side missions, known as The Navy Hill Transmission, has the Agent searching the western part of Washington, D.C. for the source of a mysterious encoded transmission, which ends up leading him or her to an old underground OSS bunker.
It is featured in Hearts of Iron IV in the 2020 expansion, La Resistance, as the United States' Secret Agency.
Literature
Jean Bruce's French pulp fiction series, OSS 117, follows the adventure of Hubert Bonisseur de la Bath, alias OSS 117, a French operative working for the OSS. The original series (four or five books a year) lasted from 1949 to 1963, until the death of Jean Bruce, and was continued by his wife and children until 1992. Numerous films were made from it in the 1960s, and in 2006 a nostalgic comedy was made, celebrating the spy movie genre, OSS 117: Cairo, Nest of Spies, with Jean Dujardin playing OSS 117. A sequel followed in 2009 called OSS 117: Lost in Rio (original title in French: OSS 117: Rio Ne Répond Plus).
In Allen Ginsberg's 1975 poem 'Hadda Be Playing on the Jukebox', the OSS is referenced as having employed "Corsican goons" to break the 1948 Marseille dock strike and to have been involved in the smuggling of "Indochina heroin" in the 1960s.
W.E.B. Griffin's Honor Bound and Men At War series revolve around fictional OSS operations. Some of his characters in The Corps Series also are recruited by the OSS, notably Ken McCoy, Edward Banning, and Fleming Pickering.
Roger Wolcott Hall's book, You're Stepping on My Cloak and Dagger (1957), is a witty look at Hall's experiences with the OSS.
The OSS also appears in William Stevenson's book Intrepid's Last Case (1986).
Television
In the American animated comedy series Archer, the character Malory Archer (mother of the main character Sterling Archer) is a former O.S.S. agent.
One of the characters in the Ellery Queen episode, "The Adventure of Colonel Niven's Memoirs" (1975), identifies himself as "Major George Pearson, O.S.S."; he offers some Soviet diplomats political asylum.
In 1957–1958, Ron Randell starred in the series O.S.S.
In Knight Rider, Devon Miles mentions that he served in the OSS during World War II.
In The X-Files season 6 episode "Triangle", the woman from the 1939 scenes, played by Gillian Anderson (who also portrays Scully), is a member of the OSS.
See also
Charles Douglas Jackson
Operation Halyard
Operation Jedburgh
Operation Paperclip
OSS Detachment 101, which operated in the China Burma India Theater of World War II
Paramarines
Special Forces (United States Army)
Special Operations Executive
X-2 Counter Espionage Branch
Central Intelligence Agency
History of espionage
Notes
References
Further reading
Albarelli, H.P. A Terrible Mistake: The Murder of Frank Olson and the CIA's Secret Cold War Experiments (2009)
Aldrich, Richard J. Intelligence and the War Against Japan: Britain, America and the Politics of Secret Service (Cambridge: Cambridge University Press, 2000)
Alsop, Stewart and Braden, Thomas. Sub Rosa: The OSS and American Espionage (New York: Reynal & Hitchcock, 1946)
Bank, Aaron. From OSS to Green Berets: The Birth of Special Forces (Novato, CA: Presidio, 1986)
Bartholomew-Feis, Dixee R. The OSS and Ho Chi Minh: Unexpected Allies in the War against Japan (Lawrence : University Press of Kansas, 2006)
Bernstein, Barton J. "Birth of the U.S. biological warfare program" Scientific American 256: 116 – 121, 1987.
Brown, Anthony Cave. The Last Hero: Wild Bill Donovan (New York: Times Books, 1982)
Brunner, John W. OSS Weapons. Phillips Publications, Williamstown, N.J., 1994.
Brunner, John W. OSS Weapons II. Phillips Publications, Williamstown, N.J., 2005.
Brunner, John W. OSS Crossbows. Phillips Publications, Williamstown, N.J., 1991.
Burke, Michael. "Outrageous Good Fortune: A Memoir" (Boston-Toronto: Little, Brown and Company)
Casey, William J. The Secret War Against Hitler (Washington: Regnery Gateway, 1988)
Chalou, George C. (ed.) The Secrets War: The Office of Strategic Services in World War II (Washington: National Archives and Records Administration, 1991)
Chambers II, John Whiteclay. OSS Training in the National Parks and Service Abroad in World War II (NPS, 2008) online; chapters 1-2 and 8-11 provide a useful summary history of OSS by a scholar.
Dawidoff, Nicholas. The Catcher was a Spy: The Mysterious Life of Moe Berg ( New York: Vintage Books, 1994)
Doundoulakis, Helias. Trained to be an OSS Spy (Xlibris, 2014).
Dulles, Allen. The Secret Surrender (New York: Harper & Row, 1966)
Dunlop, Richard. Donovan: America's Master Spy (Chicago: Rand McNally, 1982)
Ford, Corey. Donovan of OSS (Boston: Little, Brown, 1970)
Ford, Corey, MacBain A. "Cloak and Dagger: The Secret Story of O.S.S." (New York: Random House 1945,1946)
Grose, Peter. Gentleman Spy: The Life of Allen Dulles (Boston: Houghton Mifflin, 1994)
Hassell, A, and MacRae, S: Alliance of Enemies: The Untold Story of the Secret American and German Collaboration to End World War II, Thomas Dunne Books, 2006.
Hunt, E. Howard. American Spy, 2007
Jakub, Jay. Spies and Saboteurs: Anglo-American Collaboration and Rivalry in Human Intelligence Collection and Special Operations, 1940–45 (New York: St. Martin's, 1999)
Jones, Ishmael. The Human Factor: Inside the CIA's Dysfunctional Intelligence Culture (New York: Encounter Books, 2008, rev 2010)
Katz, Barry M. Foreign Intelligence: Research and Analysis in the Office of Strategic Services, 1942–1945 (Cambridge: Harvard University Press, 1989)
Kent, Sherman. Strategic Intelligence for American Foreign Policy (Hamden, CT: Archon, 1965 [1949])
McIntosh, Elizabeth P. Sisterhood of Spies: The Women of the OSS (Annapolis, MD: Naval Institute Press, 1998)
Mauch, Christof. The Shadow War Against Hitler: The Covert Operations of America's Wartime Secret Intelligence Service (2005), scholarly history of OSS.
Melton, H. Keith. OSS Special Weapons and Equipment: Spy Devices of World War II (New York: Sterling Publishing, 1991)
Moulin, Pierre. U.S. Samurais in Bruyeres (CPL Editions: Luxembourg, 1993)
Paulson, A.C. 1989. OSS Silenced Pistol. Machine Gun News. 3(6):28-30.
Paulson, A.C. 1995. OSS Weapons. Fighting Firearms. 3(2):20-21,80-81.
Paulson, A.C. 2002. HDMS silenced .22 pistols in Vietnam. The Small Arms Review. 5(7):119-120.
Paulson, A.C. 2003. WWII vintage silent .22LR [High Standard OSS HDMS pistol]. Guns & Weapons for Law Enforcement. 15(2):24-29,72.
Persico, Joseph E. Roosevelt's Secret War: FDR and World War II Espionage (2001).
Persico, Joseph E. Piercing the Reich: The Penetration of Nazi Germany by American Secret Agents During World War II (New York: Viking, 1979) Reprinted in 1997 by Barnes & Noble Books.
Peterson, Neal H. (ed.) From Hitler's Doorstep: The Wartime Intelligence Reports of Allen Dulles, 1942–1945 (University Park: Pennsylvania State University Press, 1996)
Pinck, Daniel C. Journey to Peking: A Secret Agent in Wartime China (Naval Institute Press, 2003)
Pinck, Daniel C., Jones, Geoffrey M.T. and Pinck, Charles T. (eds.) Stalking the History of the Office of Strategic Services: An OSS Bibliography (Boston: OSS/Donovan Press, 2000)
Roosevelt, Kermit (ed.) War Report of the OSS, two volumes (New York: Walker, 1976)
Rudgers, David F. Creating the Secret State: The Origins of the Central Intelligence Agency, 1943–1947 (Lawrence, KS: University of Kansas Press, 2000)
Smith, Bradley F. and Agarossi, Elena. Operation Sunrise: The Secret Surrender (New York: Basic Books, 1979)
Smith, Bradley F. The Shadow Warriors: OSS and the Origins of the CIA (New York: Basic, 1983)
Smith, Richard Harris. OSS: The Secret History of America's First Central Intelligence Agency (Berkeley: University of California Press, 1972; Guilford, CT: Lyons Press, 2005)
Steury, Donald P. The Intelligence War (New York: Metrobooks, 2000)
Troy, Thomas F. Donovan and the CIA: A History of the Establishment of the Central Intelligence Agency (Frederick, MD: University Publications of America, 1981)
Troy, Thomas F. Wild Bill & Intrepid (New Haven: Yale University Press, 1996)
Waller, John H. The Unseen War in Europe: Espionage and Conspiracy in the Second World War (New York: Random House, 1996)
Warner, Michael. The Office of Strategic Services: America's First Intelligence Agency (Washington, D.C.: Central Intelligence Agency, 2001)
Yu, Maochun. OSS in China: Prelude to Cold War (New Haven: Yale University Press, 1996)
External links
"The Office of Strategic Services: America's First Intelligence Agency"
National Park Service Report on OSS Training Facilities
Collection of Documents at the Franklin D. Roosevelt Presidential Museum and Library, Part 1 and Part 2
The OSS Society
OSS Reborn
Office of Strategic Services collection at Internet Archive
1942 establishments in the United States
1945 disestablishments in the United States
Agencies of the United States government during World War II
Congressional Gold Medal recipients
Defunct United States intelligence agencies
Government agencies disestablished in 1945
Government agencies established in 1942
Intelligence services of World War II
World War II espionage
World War II resistance movements |
22739 | https://en.wikipedia.org/wiki/Obfuscation%20%28software%29 | Obfuscation (software) | In software development, obfuscation is the deliberate act of creating source or machine code that is difficult for humans to understand. Like obfuscation in natural language, it may use needlessly roundabout expressions to compose statements. Programmers may deliberately obfuscate code to conceal its purpose (security through obscurity) or its logic or implicit values embedded in it, primarily, in order to prevent tampering, deter reverse engineering, or even to create a puzzle or recreational challenge for someone reading the source code. This can be done manually or by using an automated tool, the latter being the preferred technique in industry.
Overview
The architecture and characteristics of some languages may make them easier to obfuscate than others. C, C++, and Perl are some examples of languages that are easy to obfuscate. Haskell is also quite obfuscatable, despite being quite different in structure.
The properties that make a language obfuscatable are not immediately obvious.
Recreational obfuscation
Writing and reading obfuscated source code can be a brain teaser. A number of programming contests reward the most creatively obfuscated code, such as the International Obfuscated C Code Contest and the Obfuscated Perl Contest.
Types of obfuscations include simple keyword substitution, use or non-use of whitespace to create artistic effects, and self-generating or heavily compressed programs.
According to Nick Montfort, techniques may include:
naming obfuscation, which includes naming variables in a meaningless or deceptive way;
data/code/comment confusion, which includes making some actual code look like comments or confusing syntax with data;
double coding, which can be displaying code in poetry form or interesting shapes.
Short obfuscated Perl programs may be used in signatures of Perl programmers. These are JAPHs ("Just another Perl hacker").
Examples
This is a winning entry from the International Obfuscated C Code Contest written by Ian Phillipps in 1988 and subsequently reverse engineered by Thomas Ball.
/*
LEAST LIKELY TO COMPILE SUCCESSFULLY:
Ian Phillipps, Cambridge Consultants Ltd., Cambridge, England
*/
#include <stdio.h>
main(t,_,a)
char
*
a;
{
return!
0<t?
t<3?
main(-79,-13,a+
main(-87,1-_,
main(-86, 0, a+1 )
+a)):
1,
t<_?
main(t+1, _, a )
:3,
main ( -94, -27+t, a )
&&t == 2 ?_
<13 ?
main ( 2, _+1, "%s %d %d\n" )
:9:16:
t<0?
t<-72?
main( _, t,
"@n'+,#'/*{}w+/w#cdnr/+,{}r/*de}+,/*{*+,/w{%+,/w#q#n+,/#{l,+,/n{n+,/+#n+,/#;\
#q#n+,/+k#;*+,/'r :'d*'3,}{w+K w'K:'+}e#';dq#'l q#'+d'K#!/+k#;\
q#'r}eKK#}w'r}eKK{nl]'/#;#q#n'){)#}w'){){nl]'/+#n';d}rw' i;# ){nl]!/n{n#'; \
r{#w'r nc{nl]'/#{l,+'K {rw' iK{;[{nl]'/w#q#\
\
n'wk nw' iwk{KK{nl]!/w{%'l##w#' i; :{nl]'/*{q#'ld;r'}{nlwb!/*de}'c ;;\
{nl'-{}rw]'/+,}##'*}#nc,',#nw]'/+kd'+e}+;\
#'rdq#w! nr'/ ') }+}{rl#'{n' ')# }'+}##(!!/")
:
t<-50?
_==*a ?
putchar(31[a]):
main(-65,_,a+1)
:
main((*a == '/') + t, _, a + 1 )
:
0<t?
main ( 2, 2 , "%s")
:*a=='/'||
main(0,
main(-61,*a, "!ek;dc i@bK'(q)-[w]*%n+r3#l,{}:\nuwloca-O;m .vpbks,fxntdCeghiry")
,a+1);}
It is a C program that when compiled and run will generate the 12 verses of The 12 Days of Christmas. It contains all the strings required for the poem in an encoded form within the code.
A non-winning entry from the same year, this next example illustrates creative use of whitespace; it generates mazes of arbitrary length:
char*M,A,Z,E=40,J[40],T[40];main(C){for(*J=A=scanf(M="%d",&C);
-- E; J[ E] =T
[E ]= E) printf("._"); for(;(A-=Z=!Z) || (printf("\n|"
) , A = 39 ,C --
) ; Z || printf (M ))M[Z]=Z[A-(E =A[J-Z])&&!C
& A == T[ A]
|6<<27<rand()||!C&!Z?J[T[E]=T[A]]=E,J[T[A]=A-Z]=A,"_.":" |"];}
ANSI-compliant C compilers don't allow constant strings to be overwritten, which can be avoided by changing "*M" to "M[3]" and omitting "M=".
The following example by Óscar Toledo Gutiérrez, Best of Show entry in the 19th IOCCC, implements an 8080 emulator complete with terminal and disk controller, capable of booting CP/M-80 and running CP/M applications:
#include <stdio.h>
#define n(o,p,e)=y=(z=a(e)%16 p x%16 p o,a(e)p x p o),h(
#define s 6[o]
#define p z=l[d(9)]|l[d(9)+1]<<8,1<(9[o]+=2)||++8[o]
#define Q a(7)
#define w 254>(9[o]-=2)||--8[o],l[d(9)]=z,l[1+d(9)]=z>>8
#define O )):((
#define b (y&1?~s:s)>>"\6\0\2\7"[y/2]&1?0:(
#define S )?(z-=
#define a(f)*((7&f)-6?&o[f&7]:&l[d(5)])
#define C S 5 S 3
#define D(E)x/8!=16+E&198+E*8!=x?
#define B(C)fclose((C))
#define q (c+=2,0[c-2]|1[c-2]<<8)
#define m x=64&x?*c++:a(x),
#define A(F)=fopen((F),"rb+")
unsigned char o[10],l[78114],*c=l,*k=l
#define d(e)o[e]+256*o[e-1]
#define h(l)s=l>>8&1|128&y|!(y&255)*64|16&z|2,y^=y>>4,y^=y<<2,y^=~y>>1,s|=y&4
+64506; e,V,v,u,x,y,z,Z; main(r,U)char**U;{
{ { { } } } { { { } } } { { { } } } { { { } } }
{ { { } } } { { { } } } { { { } } } { { { } } }
{ { { } } } { { { } } } { { { } } } { { { } } }
{ { { } } } { { { } } } { { { } } } { { { } } }
{ { { } } } { { { } } } { { { } } } { { { } } }
{ { { } } } { { { } } } { { { } } } { { { } } }
{ { ; } } { { { } } } { { ; } } { { { } } }
{ { { } } } { { { } } } { { { } } } { { { } } }
{ { { } } } { { { } } } { { { } } } { { { } } }
{ { { } } } { { { } } } { { { } } } { { { } } }
{ { { } } } { { { } } } { { { } } } { { { } } }
{ { { } } } { { { } } } { { { } } } { { { } } }
{ { { } } } { { { } } } { { { } } } { { { } } }
for(v A((u A((e A((r-2?0:(V A(1[U])),"C")
),system("stty raw -echo min 0"),fread(l,78114,1,e),B(e),"B")),"A")); 118-(x
=*c++); (y=x/8%8,z=(x&199)-4 S 1 S 1 S 186 S 2 S 2 S 3 S 0,r=(y>5)*2+y,z=(x&
207)-1 S 2 S 6 S 2 S 182 S 4)?D(0)D(1)D(2)D(3)D(4)D(5)D(6)D(7)(z=x-2 C C C C
C C C C+129 S 6 S 4 S 6 S 8 S 8 S 6 S 2 S 2 S 12)?x/64-1?((0 O a(y)=a(x) O 9
[o]=a(5),8[o]=a(4) O 237==*c++?((int (*)())(2-*c++?fwrite:fread))(l+*k+1[k]*
256,128,1,(fseek(y=5[k]-1?u:v,((3[k]|4[k]<<8)<<7|2[k])<<7,Q=0),y)):0 O y=a(5
),z=a(4),a(5)=a(3),a(4)=a(2),a(3)=y,a(2)=z O c=l+d(5) O y=l[x=d(9)],z=l[++x]
,x[l]=a(4),l[--x]=a(5),a(5)=y,a(4)=z O 2-*c?Z||read(0,&Z,1),1&*c++?Q=Z,Z=0:(
Q=!!Z):(c++,Q=r=V?fgetc(V):-1,s=s&~1|r<0) O++c,write(1,&7[o],1) O z=c+2-l,w,
c=l+q O p,c=l+z O c=l+q O s^=1 O Q=q[l] O s|=1 O q[l]=Q O Q=~Q O a(5)=l[x=q]
,a(4)=l[++x] O s|=s&16|9<Q%16?Q+=6,16:0,z=s|=1&s|Q>159?Q+=96,1:0,y=Q,h(s<<8)
O l[x=q]=a(5),l[++x]=a(4) O x=Q%2,Q=Q/2+s%2*128,s=s&~1|x O Q=l[d(3)]O x=Q /
128,Q=Q*2+s%2,s=s&~1|x O l[d(3)]=Q O s=s&~1|1&Q,Q=Q/2|Q<<7 O Q=l[d(1)]O s=~1
&s|Q>>7,Q=Q*2|Q>>7 O l[d(1)]=Q O m y n(0,-,7)y) O m z=0,y=Q|=x,h(y) O m z=0,
y=Q^=x,h(y) O m z=Q*2|2*x,y=Q&=x,h(y) O m Q n(s%2,-,7)y) O m Q n(0,-,7)y) O
m Q n(s%2,+,7)y) O m Q n(0,+,7)y) O z=r-8?d(r+1):s|Q<<8,w O p,r-8?o[r+1]=z,r
[o]=z>>8:(s=~40&z|2,Q=z>>8) O r[o]--||--o[r-1]O a(5)=z=a(5)+r[o],a(4)=z=a(4)
+o[r-1]+z/256,s=~1&s|z>>8 O ++o[r+1]||r[o]++O o[r+1]=*c++,r[o]=*c++O z=c-l,w
,c=y*8+l O x=q,b z=c-l,w,c=l+x) O x=q,b c=l+x) O b p,c=l+z) O a(y)=*c++O r=y
,x=0,a(r)n(1,-,y)s<<8) O r=y,x=0,a(r)n(1,+,y)s<<8))));
system("stty cooked echo"); B((B((V?B(V):0,u)),v)); }
An example of a JAPH:
@P=split//,".URRUU\c8R";@d=split//,"\nrekcah xinU / lreP rehtona tsuJ";sub p{
@p{"r$p","u$p"}=(P,P);pipe"r$p","u$p";++$p;($q*=2)+=$f=!fork;map{$P=$P[$f^ord
($p{$_})&6];$p{$_}=/ ^$P/ix?$P:close$_}keys%p}p;p;p;p;p;map{$p{$_}=~/^[P.]/&&
close$_}%p;wait until$?;map{/^r/&&<$_>}%p;$_=$d[$q];sleep rand(2)if/\S/;print
This slowly displays the text "Just another Perl / Unix hacker", multiple characters at a time, with delays.
Some Python examples can be found in the official Python programming FAQ and elsewhere.
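For illustration, here is a small hand-obfuscated Python snippet (not taken from the FAQ) in the same spirit: the output string never appears in the source and is instead hidden as a list of character codes rebuilt at run time.
# Prints "Hello, World!" without the string appearing anywhere in the source.
print(''.join(map(chr, [72, 101, 108, 108, 111, 44, 32, 87, 111, 114, 108, 100, 33])))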
Advantages of obfuscation
Faster loading time
The scripts used by web pages have to be sent over the network to the user agent that will run them. The smaller they are, the faster the download. In such use cases, minification (a relatively trivial form of obfuscation) can produce real advantages.
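Minifiers are usually applied to JavaScript served to browsers; the sketch below uses Python only to illustrate the idea, contrasting a readable function with a hypothetical "minified" equivalent in which comments, docstrings, and descriptive names have been stripped.
# Readable original.
def total_price(prices, tax_rate):
    """Return the sum of prices with tax applied."""
    subtotal = sum(prices)
    return subtotal * (1 + tax_rate)

# Minified/lightly obfuscated equivalent: same behaviour, smaller to transmit,
# but the intent is harder to read.
def t(p, r): return sum(p) * (1 + r)

assert total_price([1.0, 2.0], 0.1) == t([1.0, 2.0], 0.1)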
Reduced memory usage
In older run-time interpreted languages (better known as scripting languages), such as early versions of BASIC, programs executed faster and took less RAM if they used single-letter variable names, avoided comments, and contained only necessary blank characters (in brief, the shorter the faster).
Protection for trade secrets
Where the source code of a program must be sent to the user, for example JavaScript in a web page, any trade secret, licensing mechanism or other intellectual property contained within the program is accessible to the user. Obfuscation makes it harder to understand the code and make modifications to it.
Desktop programs sometimes include features that help to obfuscate their code. Some programs may not store their entire code on disk, and may pull a portion of their binary code via the web at runtime. They may also use compression and/or encryption, adding additional steps to the disassembly process.
Prevention of circumvention
Obfuscating the program can, in such cases, make it harder for users to circumvent license mechanisms or obtain information the program's supplier wished to hide. It can also be used to make it harder to hack multiplayer games.
Prevention of virus detection
Malicious programs may use obfuscation to disguise what they are really doing. Most users do not even read such programs; and those that do typically have access to software tools that can help them to undo the obfuscation, so this strategy is of limited efficacy.
Disadvantages of obfuscation
While obfuscation can make reading, writing, and reverse-engineering a program difficult and time-consuming, it will not necessarily make it impossible.
It adds time and complexity to the build process for the developers.
It can make debugging issues after the software has been obfuscated extremely difficult.
Once code becomes abandonware and is no longer maintained, hobbyists may want to maintain the program, add mods, or understand it better. Obfuscation makes it hard for end users to do useful things with the code.
Certain kinds of obfuscation (i.e., code that is not simply a local binary but downloads small binaries from a web server as needed) can degrade performance and/or require an Internet connection.
Decompilers
A decompiler can reverse-engineer source code from an executable or library. Decompilation is sometimes called a man-at-the-end attack, based on the traditional cryptographic attack known as "man-in-the-middle". It puts source code in the hands of the user, although this source code is often difficult to read. The source code is likely to have random function and variable names, incorrect variable types, and use different logic than the original source code (due to compiler optimizations).
Cryptographic obfuscation
Recently, cryptographers have explored the idea of obfuscating code so that reverse-engineering the code is cryptographically hard. This is formalized in the many proposals for indistinguishability obfuscation, a cryptographic primitive that, if possible to build securely, would allow one to construct many other kinds of cryptography, including completely novel types that no one knows how to make. (A stronger notion, black-box obfuscation, was shown impossible in 2001 when researchers constructed programs that cannot be obfuscated in this notion.)
Notifying users of obfuscated code
Some antivirus software, such as AVG AntiVirus, will also alert users when they land on a website with code that is manually obfuscated, as one of the purposes of obfuscation can be to hide malicious code. However, some developers may employ code obfuscation for the purpose of reducing file size or increasing security. The average user may not expect their antivirus software to provide alerts about an otherwise harmless piece of code, especially from trusted corporations, so such a feature may actually deter users from using legitimate software.
Certain major browsers such as Firefox and Chrome also disallow browser extensions containing obfuscated code.
Obfuscating software
A variety of tools exist to perform or assist with code obfuscation. These include experimental research tools created by academics, hobbyist tools, commercial products written by professionals, and open-source software. Deobfuscation tools also exist that attempt to perform the reverse transformation.
Although the majority of commercial obfuscation solutions work by transforming either program source code, or platform-independent bytecode as used by Java and .NET, there are also some that work directly on compiled binaries.
Obfuscation and copyleft licenses
There has been debate on whether it is illegal to skirt copyleft software licenses by releasing source code in obfuscated form, such as in cases in which the author is less willing to make the source code available. The issue is addressed in the GNU General Public License by requiring the "preferred form for making modifications" to be made available. The GNU website states "Obfuscated 'source code' is not real source code and does not count as source code."
See also
AARD code
Spaghetti code
Write-only language
Decompilation
Esoteric programming language
Quine
Overlapping instructions
Polymorphic code
Hardware obfuscation
Underhanded C Contest
Source-to-source compiler
ProGuard (Java Obfuscator)
Dotfuscator (.Net Obfuscator)
Digital rights management
Indistinguishability obfuscation
Source code beautification
Notes
References
Seyyedhamzeh, Javad, ABCME: A Novel Metamorphic Engine, 17th National Computer Conference, Sharif University of Technology, Tehran, Iran, 2012.
B. Barak, O. Goldreich, R. Impagliazzo, S. Rudich, A. Sahai, S. Vadhan and K. Yang. "On the (Im)possibility of Obfuscating Programs". 21st Annual International Cryptology Conference, Santa Barbara, California, USA. Springer Verlag LNCS Volume 2139, 2001.
External links
The International Obfuscated C Code Contest
Protecting Java Code Via Code Obfuscation, ACM Crossroads, Spring 1998 issue
Can we obfuscate programs?
Yury Lifshits. Lecture Notes on Program Obfuscation (Spring'2005)
c2:BlackBoxComputation
Anti-patterns
Articles with example C code
Obfuscation
Source code
Software obfuscation
Program transformation
es:Ofuscación#Informática |
22747 | https://en.wikipedia.org/wiki/OSI%20model | OSI model | The Open Systems Interconnection model (OSI model) is a conceptual model that characterises and standardises the communication functions of a telecommunication or computing system without regard to its underlying internal structure and technology. Its goal is the interoperability of diverse communication systems with standard communication protocols.
The model partitions the flow of data in a communication system into seven abstraction layers, from the physical implementation of transmitting bits across a communications medium to the highest-level representation of data of a distributed application. Each intermediate layer serves a class of functionality to the layer above it and is served by the layer below it. Classes of functionality are realized in software by standardized communication protocols.
The OSI model was developed starting in the late 1970s to support the emergence of the diverse computer networking methods that were competing for application in the large national networking efforts in the world. In the 1980s, the model became a working product of the Open Systems Interconnection group at the International Organization for Standardization (ISO). While attempting to provide a comprehensive description of networking, the model failed to garner reliance by the software architects in the design of the early Internet, which is reflected in the less prescriptive Internet Protocol Suite, principally sponsored under the auspices of the Internet Engineering Task Force (IETF).
History
In the early- and mid-1970s, networking was largely either government-sponsored (NPL network in the UK, ARPANET in the US, CYCLADES in France) or vendor-developed with proprietary standards, such as IBM's Systems Network Architecture and Digital Equipment Corporation's DECnet. Public data networks were only just beginning to emerge, and these began to use the X.25 standard in the late 1970s.
The Experimental Packet Switched System in the UK circa 1973–1975 identified the need for defining higher-level protocols. The UK National Computing Centre publication 'Why Distributed Computing', which came from considerable research into future configurations for computer systems, resulted in the UK presenting the case for an international standards committee to cover this area at the ISO meeting in Sydney in March 1977.
Beginning in 1977, the International Organization for Standardization (ISO) conducted a program to develop general standards and methods of networking. A similar process evolved at the International Telegraph and Telephone Consultative Committee (CCITT, from French: Comité Consultatif International Téléphonique et Télégraphique). Both bodies developed documents that defined similar networking models. The OSI model was first defined in raw form in Washington, DC in February 1978 by Hubert Zimmermann of France and the refined but still draft standard was published by the ISO in 1980.
The drafters of the reference model had to contend with many competing priorities and interests. The rate of technological change made it necessary to define standards that new systems could converge to rather than standardizing procedures after the fact; the reverse of the traditional approach to developing standards. Although not a standard itself, it was a framework in which future standards could be defined.
In 1983, the CCITT and ISO documents were merged to form The Basic Reference Model for Open Systems Interconnection, usually referred to as the Open Systems Interconnection Reference Model, OSI Reference Model, or simply OSI model. It was published in 1984 by both the ISO, as standard ISO 7498, and the renamed CCITT (now called the Telecommunications Standardization Sector of the International Telecommunication Union or ITU-T) as standard X.200.
OSI had two major components, an abstract model of networking, called the Basic Reference Model or seven-layer model, and a set of specific protocols. The OSI reference model was a major advance in the standardisation of network concepts. It promoted the idea of a consistent model of protocol layers, defining interoperability between network devices and software.
The concept of a seven-layer model was provided by the work of Charles Bachman at Honeywell Information Systems. Various aspects of OSI design evolved from experiences with the NPL network, ARPANET, CYCLADES, EIN, and the International Networking Working Group (IFIP WG6.1). In this model, a networking system was divided into layers. Within each layer, one or more entities implement its functionality. Each entity interacted directly only with the layer immediately beneath it and provided facilities for use by the layer above it.
The OSI standards documents are available from the ITU-T as the X.200-series of recommendations. Some of the protocol specifications were also available as part of the ITU-T X series. The equivalent ISO and ISO/IEC standards for the OSI model were available from ISO. Not all are free of charge.
OSI was an industry effort, attempting to get industry participants to agree on common network standards to provide multi-vendor interoperability. It was common for large networks to support multiple network protocol suites, with many devices unable to interoperate with other devices because of a lack of common protocols. For a period in the late 1980s and early 1990s, engineers, organizations and nations became polarized over the issue of which standard, the OSI model or the Internet protocol suite, would result in the best and most robust computer networks. However, while OSI developed its networking standards in the late 1980s, TCP/IP came into widespread use on multi-vendor networks for internetworking.
The OSI model is still used as a reference for teaching and documentation; however, the OSI protocols originally conceived for the model did not gain popularity. Some engineers argue the OSI reference model is still relevant to cloud computing. Others say the original OSI model doesn't fit today's networking protocols and have suggested instead a simplified approach.
Definitions
Communication protocols enable an entity in one host to interact with a corresponding entity at the same layer in another host. Service definitions, like the OSI Model, abstractly describe the functionality provided to an (N)-layer by an (N-1) layer, where N is one of the seven layers of protocols operating in the local host.
At each level N, two entities at the communicating devices (layer N peers) exchange protocol data units (PDUs) by means of a layer N protocol. Each PDU contains a payload, called the service data unit (SDU), along with protocol-related headers or footers.
Data processing by two communicating OSI-compatible devices proceeds as follows:
The data to be transmitted is composed at the topmost layer of the transmitting device (layer N) into a protocol data unit (PDU).
The PDU is passed to layer N-1, where it is known as the service data unit (SDU).
At layer N-1 the SDU is concatenated with a header, a footer, or both, producing a layer N-1 PDU. It is then passed to layer N-2.
The process continues until reaching the lowermost level, from which the data is transmitted to the receiving device.
At the receiving device the data is passed from the lowest to the highest layer as a series of SDUs while being successively stripped from each layer's header or footer until reaching the topmost layer, where the last of the data is consumed.
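The following Python sketch (an illustration only, not an implementation of any OSI protocol; the layer names and header format are invented for the example) shows this nesting of PDUs: on the way down, each layer wraps the SDU handed to it with its own header, and on the way up the headers are stripped in reverse order.
def encapsulate(payload: bytes, layers) -> bytes:
    pdu = payload
    for name in layers:                    # topmost layer first
        pdu = f"[{name}]".encode() + pdu   # this layer's SDU becomes its PDU body
    return pdu

def decapsulate(pdu: bytes, layers) -> bytes:
    for name in reversed(layers):          # outermost (lowest-layer) header first
        header = f"[{name}]".encode()
        assert pdu.startswith(header), "malformed PDU"
        pdu = pdu[len(header):]
    return pdu

layers = ["application", "presentation", "session", "transport", "network", "data link"]
frame = encapsulate(b"user data", layers)
print(frame)                               # headers nested around the payload
print(decapsulate(frame, layers))          # b'user data'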
Standards documents
The OSI model was defined in ISO/IEC 7498 which consists of the following parts:
ISO/IEC 7498-1 The Basic Model
ISO/IEC 7498-2 Security Architecture
ISO/IEC 7498-3 Naming and addressing
ISO/IEC 7498-4 Management framework
ISO/IEC 7498-1 is also published as ITU-T Recommendation X.200.
Layer architecture
The recommendation X.200 describes seven layers, labelled 1 to 7. Layer 1 is the lowest layer in this model.
Layer 1: Physical layer
The physical layer is responsible for the transmission and reception of unstructured raw data between a device and a physical transmission medium. It converts the digital bits into electrical, radio, or optical signals. Layer specifications define characteristics such as voltage levels, the timing of voltage changes, physical data rates, maximum transmission distances, modulation scheme, channel access method and physical connectors. This includes the layout of pins, voltages, line impedance, cable specifications, signal timing and frequency for wireless devices. Bit rate control is done at the physical layer and may define transmission mode as simplex, half duplex, and full duplex. The components of a physical layer can be described in terms of a network topology. Physical layer specifications are included in the specifications for the ubiquitous Bluetooth, Ethernet, and USB standards. An example of a less well-known physical layer specification would be for the CAN standard.
Layer 2: Data link layer
The data link layer provides node-to-node data transfer—a link between two directly connected nodes. It detects and possibly corrects errors that may occur in the physical layer.
It defines the protocol to establish and terminate a connection between two physically connected devices. It also defines the protocol for flow control between them.
IEEE 802 divides the data link layer into two sublayers:
Medium access control (MAC) layer – responsible for controlling how devices in a network gain access to a medium and permission to transmit data.
Logical link control (LLC) layer – responsible for identifying and encapsulating network layer protocols, and controls error checking and frame synchronization.
The MAC and LLC layers of IEEE 802 networks such as 802.3 Ethernet, 802.11 Wi-Fi, and 802.15.4 ZigBee operate at the data link layer.
The Point-to-Point Protocol (PPP) is a data link layer protocol that can operate over several different physical layers, such as synchronous and asynchronous serial lines.
The ITU-T G.hn standard, which provides high-speed local area networking over existing wires (power lines, phone lines and coaxial cables), includes a complete data link layer that provides both error correction and flow control by means of a selective-repeat sliding-window protocol.
Security, specifically (authenticated) encryption, at this layer can be applied with MACSec.
Layer 3: Network layer
The network layer provides the functional and procedural means of transferring packets from one node to another connected in "different networks". A network is a medium to which many nodes can be connected, on which every node has an address and which permits nodes connected to it to transfer messages to other nodes connected to it by merely providing the content of a message and the address of the destination node and letting the network find the way to deliver the message to the destination node, possibly routing it through intermediate nodes. If the message is too large to be transmitted from one node to another on the data link layer between those nodes, the network may implement message delivery by splitting the message into several fragments at one node, sending the fragments independently, and reassembling the fragments at another node. It may, but does not need to, report delivery errors.
Message delivery at the network layer is not necessarily guaranteed to be reliable; a network layer protocol may provide reliable message delivery, but it need not do so.
A number of layer-management protocols, a function defined in the management annex, ISO 7498/4, belong to the network layer. These include routing protocols, multicast group management, network-layer information and error, and network-layer address assignment. It is the function of the payload that makes these belong to the network layer, not the protocol that carries them.
Layer 4: Transport layer
The transport layer provides the functional and procedural means of transferring variable-length data sequences from a source to a destination host, while maintaining the quality of service functions.
The transport layer may control the reliability of a given link through flow control, segmentation/desegmentation, and error control. Some protocols are state- and connection-oriented. This means that the transport layer can keep track of the segments and retransmit those that fail delivery. The transport layer may also provide the acknowledgement of the successful data transmission and sends the next data if no errors occurred. The transport layer creates segments out of the message received from the application layer. Segmentation is the process of dividing a long message into smaller messages.
Reliability, however, is not a strict requirement within the transport layer. Protocols like UDP, for example, are used in applications that are willing to accept some packet loss, reordering, errors or duplication. Streaming media, real-time multiplayer games and voice over IP (VoIP) are examples of applications in which loss of packets is not usually a fatal problem.
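A minimal Python socket sketch contrasts the two styles of transport service (the address and port below are placeholders): UDP sends self-contained datagrams with no delivery guarantee, while TCP sets up a connection and provides a reliable, ordered byte stream.
import socket

# Connectionless, best-effort datagrams (UDP): no handshake, no retransmission.
udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp.sendto(b"one datagram; may be lost, duplicated or reordered", ("192.0.2.1", 5000))
udp.close()

# Connection-oriented, reliable byte stream (TCP): the transport layer handles
# acknowledgement, retransmission and in-order delivery.
tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
tcp.connect(("192.0.2.1", 5000))
tcp.sendall(b"bytes delivered reliably and in order")
tcp.close()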
OSI defines five classes of connection-mode transport protocols ranging from class 0 (which is also known as TP0 and provides the fewest features) to class 4 (TP4, designed for less reliable networks, similar to the Internet). Class 0 contains no error recovery and was designed for use on network layers that provide error-free connections. Class 4 is closest to TCP, although TCP contains functions, such as the graceful close, which OSI assigns to the session layer. Also, all OSI TP connection-mode protocol classes provide expedited data and preservation of record boundaries. Detailed characteristics of TP0-4 classes are shown in the following table:
An easy way to visualize the transport layer is to compare it with a post office, which deals with the dispatch and classification of mail and parcels sent. A post office inspects only the outer envelope of mail to determine its delivery. Higher layers may have the equivalent of double envelopes, such as cryptographic presentation services that can be read by the addressee only. Roughly speaking, tunnelling protocols operate at the transport layer, such as carrying non-IP protocols such as IBM's SNA or Novell's IPX over an IP network, or end-to-end encryption with IPsec. While Generic Routing Encapsulation (GRE) might seem to be a network-layer protocol, if the encapsulation of the payload takes place only at the endpoint, GRE becomes closer to a transport protocol that uses IP headers but contains complete Layer 2 frames or Layer 3 packets to deliver to the endpoint. L2TP carries PPP frames inside transport segments.
Although not developed under the OSI Reference Model and not strictly conforming to the OSI definition of the transport layer, the Transmission Control Protocol (TCP) and the User Datagram Protocol (UDP) of the Internet Protocol Suite are commonly categorized as layer-4 protocols within OSI.
Transport Layer Security (TLS) does not strictly fit inside the model either. It contains characteristics of the transport and presentation layers.
Layer 5: Session layer
The session layer controls the dialogues (connections) between computers. It establishes, manages and terminates the connections between the local and remote application. It provides for full-duplex, half-duplex, or simplex operation, and establishes procedures for checkpointing, suspending, restarting, and terminating a session. In the OSI model, this layer is responsible for gracefully closing a session. This layer is also responsible for session checkpointing and recovery, which is not usually used in the Internet Protocol Suite. The session layer is commonly implemented explicitly in application environments that use remote procedure calls.
In the modern TCP/IP system, the session layer is non-existent and is simply part of TCP.
Layer 6: Presentation layer
The presentation layer establishes context between application-layer entities, in which the application-layer entities may use different syntax and semantics if the presentation service provides a mapping between them. If a mapping is available, presentation protocol data units are encapsulated into session protocol data units and passed down the protocol stack.
This layer provides independence from data representation by translating between application and network formats. The presentation layer transforms data into the form that the application accepts. This layer formats data to be sent across a network. It is sometimes called the syntax layer. The presentation layer can include compression functions. The presentation layer negotiates the transfer syntax.
The original presentation structure used the Basic Encoding Rules of Abstract Syntax Notation One (ASN.1), with capabilities such as converting an EBCDIC-coded text file to an ASCII-coded file, or serialization of objects and other data structures from and to XML. ASN.1 effectively makes an application protocol invariant with respect to syntax.
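A small Python illustration of these presentation-layer concerns (using JSON rather than ASN.1 BER as the transfer syntax, purely for brevity): structured data is serialized into an agreed wire format, and character-set conversion such as EBCDIC to a Unicode string is applied independently of the application logic.
import json

# Serialization into a negotiated transfer syntax (JSON here).
record = {"user": "alice", "balance": 42}
wire = json.dumps(record).encode("utf-8")          # sender: object -> bytes
assert json.loads(wire.decode("utf-8")) == record  # receiver: bytes -> object

# Character-set translation, e.g. EBCDIC (code page 500) to a Unicode string.
ebcdic_bytes = "HELLO".encode("cp500")
assert ebcdic_bytes != b"HELLO"                    # different byte values on the wire
assert ebcdic_bytes.decode("cp500") == "HELLO"     # same text after conversion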
Layer 7: Application layer
The application layer is the OSI layer closest to the end user, which means both the OSI application layer and the user interact directly with the software application. This layer interacts with software applications that implement a communicating component. Such application programs fall outside the scope of the OSI model. Application-layer functions typically include identifying communication partners, determining resource availability, and synchronizing communication. When identifying communication partners, the application layer determines the identity and availability of communication partners for an application with data to transmit. The most important distinction in the application layer is the distinction between the application-entity and the application. For example, a reservation website might have two application-entities: one using HTTP to communicate with its users, and one for a remote database protocol to record reservations. Neither of these protocols has anything to do with reservations. That logic is in the application itself. The application layer has no means to determine the availability of resources in the network.
Cross-layer functions
Cross-layer functions are services that are not tied to a given layer, but may affect more than one layer. Some orthogonal aspects, such as management and security, involve all of the layers (See ITU-T X.800 Recommendation). These services are aimed at improving the CIA triad—confidentiality, integrity, and availability—of the transmitted data.
Cross-layer functions are the norm, in practice, because the availability of a communication service is determined by the interaction between network design and network management protocols.
Specific examples of cross-layer functions include the following:
Security service (telecommunication) as defined by ITU-T X.800 recommendation.
Management functions, i.e. functions that permit the configuration, instantiation, monitoring and termination of the communications of two or more entities: there is a specific application-layer protocol, the common management information protocol (CMIP), and its corresponding service, the common management information service (CMIS); these need to interact with every layer in order to deal with their instances.
OSI subdivides the network layer into three sublayers: 3a) subnetwork access, 3b) subnetwork-dependent convergence and 3c) subnetwork-independent convergence. Multiprotocol Label Switching (MPLS), ATM, and X.25 are 3a protocols. MPLS was designed to provide a unified data-carrying service for both circuit-based clients and packet-switching clients which provide a datagram-based service model. It can be used to carry many different kinds of traffic, including IP packets, as well as native ATM, SONET, and Ethernet frames. Sometimes one sees reference to a Layer 2.5.
Cross MAC and PHY Scheduling is essential in wireless networks because of the time-varying nature of wireless channels. By scheduling packet transmission only in favourable channel conditions, which requires the MAC layer to obtain channel state information from the PHY layer, network throughput can be significantly improved and energy waste can be avoided.
Programming interfaces
Neither the OSI Reference Model, nor any OSI protocol specifications, outline any programming interfaces, other than deliberately abstract service descriptions. Protocol specifications define a methodology for communication between peers, but the software interfaces are implementation-specific.
For example, the Network Driver Interface Specification (NDIS) and Open Data-Link Interface (ODI) are interfaces between the media (layer 2) and the network protocol (layer 3).
Comparison to other networking suites
The table below presents a list of OSI layers, the original OSI protocols, and some approximate modern matches. It is very important to note that this correspondence is rough: the OSI model contains idiosyncrasies not found in later systems such as the IP stack in the modern Internet.
Comparison with TCP/IP model
The design of protocols in the TCP/IP model of the Internet does not concern itself with strict hierarchical encapsulation and layering. RFC 3439 contains a section entitled "Layering considered harmful". TCP/IP does recognize four broad layers of functionality which are derived from the operating scope of their contained protocols: the scope of the software application; the host-to-host transport path; the internetworking range; and the scope of the direct links to other nodes on the local network.
Despite using a different concept for layering than the OSI model, these layers are often compared with the OSI layering scheme in the following manner:
The Internet application layer maps to the OSI application layer, presentation layer, and most of the session layer.
The TCP/IP transport layer maps to the graceful close function of the OSI session layer as well as the OSI transport layer.
The internet layer performs functions corresponding to a subset of the OSI network layer.
The link layer corresponds to the OSI data link layer and may include similar functions as the physical layer, as well as some protocols of the OSI's network layer.
These comparisons are based on the original seven-layer protocol model as defined in ISO 7498, rather than refinements in the internal organization of the network layer.
The OSI protocol suite that was specified as part of the OSI project was considered by many as too complicated and inefficient, and to a large extent unimplementable. Taking the "forklift upgrade" approach to networking, it specified eliminating all existing networking protocols and replacing them at all layers of the stack. This made implementation difficult and was resisted by many vendors and users with significant investments in other network technologies. In addition, the protocols included so many optional features that many vendors' implementations were not interoperable.
Although the OSI model is often still referenced, the Internet protocol suite has become the standard for networking. TCP/IP's pragmatic approach to computer networking and to independent implementations of simplified protocols made it a practical methodology. Some protocols and specifications in the OSI stack remain in use, one example being IS-IS, which was specified for OSI as ISO/IEC 10589:2002 and adapted for Internet use with TCP/IP as .
See also
Common Management Information Service (CMIS)
GOSIP, the (U.S.) Government Open Systems Interconnection Profile
Hierarchical internetworking model
Layer 8
List of information technology initialisms
Management plane
Recursive Internetwork Architecture
Service layer
Further reading
John Day, "Patterns in Network Architecture: A Return to Fundamentals" (Prentice Hall 2007, )
Marshall Rose, "The Open Book" (Prentice-Hall, Englewood Cliffs, 1990)
David M. Piscitello, A. Lyman Chapin, Open Systems Networking (Addison-Wesley, Reading, 1993)
Andrew S. Tanenbaum, Computer Networks, 4th Edition, (Prentice-Hall, 2002)
References
External links
Microsoft Knowledge Base: The OSI Model's Seven Layers Defined and Functions Explained
ISO/IEC standard 7498-1:1994 (PDF document inside ZIP archive) (requires HTTP cookies in order to accept licence agreement)
ITU-T X.200 (the same contents as from ISO)
Cisco Systems Internetworking Technology Handbook
Reference models
Computer-related introductions in 1977
Computer-related introductions in 1979
ISO standards
ITU-T recommendations
ITU-T X Series Recommendations
ISO/IEC 7498 |
23062 | https://en.wikipedia.org/wiki/Post%20Office%20Protocol | Post Office Protocol | In computing, the Post Office Protocol (POP) is an application-layer Internet standard protocol used by e-mail clients to retrieve e-mail from a mail server. POP version 3 (POP3) is the version in common use.
Purpose
The Post Office Protocol provides access via an Internet Protocol (IP) network for a user client application to a mailbox (maildrop) maintained on a mail server. The protocol supports download and delete operations for messages. POP3 clients connect, retrieve all messages, store them on the client computer, and finally delete them from the server. This design of POP and its procedures was driven by the need of users having only temporary Internet connections, such as dial-up access, allowing these users to retrieve e-mail when connected, and subsequently to view and manipulate the retrieved messages when offline.
POP3 clients also have an option to leave mail on the server after download. By contrast, the Internet Message Access Protocol (IMAP) was designed to normally leave all messages on the server to permit management with multiple client applications, and to support both connected (online) and disconnected (offline) modes of operation.
A POP3 server listens on well-known port number 110 for service requests. Encrypted communication for POP3 is either requested after protocol initiation, using the STLS command, if supported, or by POP3S, which connects to the server using Transport Layer Security (TLS) or Secure Sockets Layer (SSL) on well-known TCP port number 995.
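Both connection styles can be exercised from Python's standard-library poplib module, as in the hedged sketch below; the host name and credentials are placeholders.
import poplib
import ssl

context = ssl.create_default_context()

# Option 1: implicit TLS ("POP3S") on port 995.
mbox = poplib.POP3_SSL("pop.example.com", 995, context=context)

# Option 2: plain POP3 on port 110, upgraded with the STLS command.
# mbox = poplib.POP3("pop.example.com", 110)
# mbox.stls(context=context)

mbox.user("alice")
mbox.pass_("secret")
count, size = mbox.stat()   # number of messages and total size in octets
print(count, size)
mbox.quit()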
Messages available to the client are determined when a POP3 session opens the maildrop, and are identified by message-number local to that session or, optionally, by a unique identifier assigned to the message by the POP server. This unique identifier is permanent and unique to the maildrop and allows a client to access the same message in different POP sessions. Mail is retrieved and marked for deletion by the message-number. When the client exits the session, mail marked for deletion is removed from the maildrop.
History
The first version of the Post Office Protocol, POP1, was specified in RFC 918 (1984). POP2 was specified in RFC 937 (1985).
POP3 is the version in most common use. It originated with RFC 1081 (1988) but the most recent specification is RFC 1939, updated with an extension mechanism (RFC 2449) and an authentication mechanism in RFC 1734. This led to a number of POP implementations such as Pine, POPmail, and other early mail clients.
While the original POP3 specification supported only an unencrypted USER/PASS login mechanism or Berkeley .rhosts access control, today POP3 supports several authentication methods to provide varying levels of protection against illegitimate access to a user's e-mail. Most are provided by the POP3 extension mechanisms. POP3 clients support SASL authentication methods via the AUTH extension. MIT Project Athena also produced a Kerberized version. RFC 1460 introduced APOP into the core protocol. APOP is a challenge/response protocol which uses the MD5 hash function in an attempt to avoid replay attacks and disclosure of the shared secret. Clients implementing APOP include Mozilla Thunderbird, Opera Mail, Eudora, KMail, Novell Evolution, RimArts' Becky!, Windows Live Mail, PowerMail, Apple Mail, and Mutt. RFC 1460 was obsoleted by RFC 1725, which was in turn obsoleted by RFC 1939.
POP4
POP4 exists only as an informal proposal adding basic folder management, multipart message support, as well as message flag management to compete with IMAP; however, its development has not progressed since 2003.
Extensions and specifications
An extension mechanism was proposed in RFC 2449 to accommodate general extensions as well as announce in an organized manner support for optional commands, such as TOP and UIDL. The RFC did not intend to encourage extensions, and reaffirmed that the role of POP3 is to provide simple support for mainly download-and-delete requirements of mailbox handling.
The extensions are termed capabilities and are listed by the CAPA command. With the exception of APOP, the optional commands were included in the initial set of capabilities. Following the lead of ESMTP (RFC 5321), capabilities beginning with an X signify local capabilities.
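For example, a client can discover a server's advertised capabilities with poplib's capa() method, which issues the CAPA command (the host below is a placeholder and the capability list shown in the comment is only illustrative):
import poplib

conn = poplib.POP3("pop.example.com", 110, timeout=10)
try:
    caps = conn.capa()   # e.g. {'TOP': [], 'UIDL': [], 'STLS': [], 'SASL': ['PLAIN']}
    print(sorted(caps))
finally:
    conn.quit()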
STARTTLS
The STARTTLS extension allows the use of Transport Layer Security (TLS) or Secure Sockets Layer (SSL) to be negotiated using the STLS command, on the standard POP3 port, rather than an alternate. Some clients and servers instead use the alternate-port method, which uses TCP port 995 (POP3S).
SDPS
Demon Internet introduced extensions to POP3 that allow multiple accounts per domain; this has become known as Standard Dial-up POP3 Service (SDPS). To access each account, the username includes the hostname, as john@hostname or john+hostname.
Google Apps uses the same method.
Kerberized Post Office Protocol
Local e-mail clients can use the Kerberized Post Office Protocol (KPOP), an application-layer Internet standard protocol, to retrieve e-mail from a remote server over a TCP/IP connection. The KPOP protocol is based on the POP3 protocol, differing in that it adds Kerberos security and that it runs by default over TCP port number 1109 instead of 110. One mail server software implementation is found in the Cyrus IMAP server.
Session example
The following POP3 session dialog is an example in RFC 1939:
S: <wait for connection on TCP port 110>
C: <open connection>
S: +OK POP3 server ready <[email protected]>
C: APOP mrose c4c9334bac560ecc979e58001b3e22fb
S: +OK mrose's maildrop has 2 messages (320 octets)
C: STAT
S: +OK 2 320
C: LIST
S: +OK 2 messages (320 octets)
S: 1 120
S: 2 200
S: .
C: RETR 1
S: +OK 120 octets
S: <the POP3 server sends message 1>
S: .
C: DELE 1
S: +OK message 1 deleted
C: RETR 2
S: +OK 200 octets
S: <the POP3 server sends message 2>
S: .
C: DELE 2
S: +OK message 2 deleted
C: QUIT
S: +OK dewey POP3 server signing off (maildrop empty)
C: <close connection>
S: <wait for next connection>
POP3 servers without the optional APOP command expect the client to log in with the USER and PASS commands:
C: USER mrose
S: +OK User accepted
C: PASS tanstaaf
S: +OK Pass accepted
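The same USER/PASS dialog can be driven programmatically; the Python sketch below (placeholder host and credentials) mirrors the transcript above, retrieving each message and marking it for deletion.
import poplib

mbox = poplib.POP3("pop.example.com", 110, timeout=10)
mbox.user("mrose")                        # C: USER mrose
mbox.pass_("tanstaaf")                    # C: PASS tanstaaf
num_messages, total_octets = mbox.stat()  # C: STAT
for i in range(1, num_messages + 1):
    response, lines, octets = mbox.retr(i)            # C: RETR i
    print(b"\r\n".join(lines).decode("utf-8", errors="replace"))
    mbox.dele(i)                          # C: DELE i (marked for deletion)
mbox.quit()                               # C: QUIT (deletions are applied)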
Server implementations
Apache James
Citadel/UX
Courier Mail Server
Cyrus IMAP server
Dovecot
Eudora Internet Mail Server
HMailServer
Ipswitch IMail Server
Kerio Connect
Mailtraq
Nginx
qmail-pop3d
Qpopper
RePOP
UW IMAP
WinGate
Zimbra
Comparison with IMAP
The Internet Message Access Protocol (IMAP) is an alternative and more recent mailbox access protocol. The highlights of differences are:
POP is a simpler protocol, making implementation easier.
POP moves the message from the email server to the local computer, although there is usually an option to leave the messages on the email server as well.
IMAP defaults to leaving the message on the email server, simply downloading a local copy.
POP treats the mailbox as a single store, and has no concept of folders.
An IMAP client performs complex queries, asking the server for headers, or the bodies of specified messages, or to search for messages meeting certain criteria. Messages in the mail repository can be marked with various status flags (e.g. "deleted" or "answered") and they stay in the repository until explicitly removed by the user—which may not be until a later session. In short: IMAP is designed to permit manipulation of remote mailboxes as if they were local. Depending on the IMAP client implementation and the mail architecture desired by the system manager, the user may save messages directly on the client machine, or save them on the server, or be given the choice of doing either.
The POP protocol requires the currently connected client to be the only client connected to the mailbox. In contrast, the IMAP protocol specifically allows simultaneous access by multiple clients and provides mechanisms for clients to detect changes made to the mailbox by other, concurrently connected, clients. See for example RFC 3501 section 5.2 which specifically cites "simultaneous access to the same mailbox by multiple agents" as an example.
When POP retrieves a message, it receives all parts of it, whereas the IMAP4 protocol allows clients to retrieve any of the individual MIME parts separately – for example, retrieving the plain text without retrieving attached files.
IMAP supports flags on the server to keep track of message state: for example, whether or not the message has been read, replied to, forwarded, or deleted.
Related requests for comments (RFCs)
– POST OFFICE PROTOCOL
– POST OFFICE PROTOCOL – VERSION 2
– Post Office Protocol – Version 3
– Post Office Protocol – Version 3 (STD 53)
– Some Observations on Implementations of the Post Office Protocol (POP3)
– IMAP/POP AUTHorize Extension for Simple Challenge/Response
– POP URL Scheme
– POP3 Extension Mechanism
– Using TLS with IMAP, POP3 and ACAP
– The SYS and AUTH POP Response Codes
– The Post Office Protocol (POP3) Simple Authentication and Security Layer (SASL) Authentication Mechanism
– Cleartext Considered Obsolete: Use of Transport Layer Security (TLS) for Email Submission and Access
See also
Email encryption
Internet Message Access Protocol
References
Further reading
External links
IANA port number assignments
POP3 Sequence Diagram (PDF)
Internet mail protocols |
23080 | https://en.wikipedia.org/wiki/Pretty%20Good%20Privacy | Pretty Good Privacy | Pretty Good Privacy (PGP) is an encryption program that provides cryptographic privacy and authentication for data communication. PGP is used for signing, encrypting, and decrypting texts, e-mails, files, directories, and whole disk partitions and to increase the security of e-mail communications. Phil Zimmermann developed PGP in 1991.
PGP and similar software follow the OpenPGP standard (RFC 4880), an open standard for encrypting and decrypting data.
Design
PGP encryption uses a serial combination of hashing, data compression, symmetric-key cryptography, and finally public-key cryptography; each step uses one of several supported algorithms. Each public key is bound to a username or an e-mail address. The first version of this system was generally known as a web of trust to contrast with the X.509 system, which uses a hierarchical approach based on certificate authority and which was added to PGP implementations later. Current versions of PGP encryption include options through an automated key management server.
PGP fingerprint
A public key fingerprint is a shorter version of a public key. From a fingerprint, someone can validate the correct corresponding public key. A fingerprint like C3A6 5E46 7B54 77DF 3C4C 9790 4D22 B3CA 5B32 FF66 can be printed on a business card.
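As a hedged sketch (assuming a version 4 key as specified in RFC 4880, section 12.2), the fingerprint is the SHA-1 hash of the public-key packet prefixed with 0x99 and a two-octet length, conventionally displayed in groups of four hexadecimal digits; the packet body below is a placeholder, not real key material.

import hashlib

def v4_fingerprint(packet_body: bytes) -> str:
    # RFC 4880, 12.2: hash 0x99, a two-octet length, then the public-key packet body.
    data = b"\x99" + len(packet_body).to_bytes(2, "big") + packet_body
    digest = hashlib.sha1(data).hexdigest().upper()
    return " ".join(digest[i:i + 4] for i in range(0, len(digest), 4))

# The body would contain version, creation time, algorithm and key MPIs;
# a dummy placeholder is used here purely to show the formatting.
print(v4_fingerprint(b"\x04" + bytes(10)))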
Compatibility
As PGP evolves, versions that support newer features and algorithms can create encrypted messages that older PGP systems cannot decrypt, even with a valid private key. Therefore, it is essential that partners in PGP communication understand each other's capabilities or at least agree on PGP settings.
Confidentiality
PGP can be used to send messages confidentially. For this, PGP uses a hybrid cryptosystem by combining symmetric-key encryption and public-key encryption. The message is encrypted using a symmetric encryption algorithm, which requires a symmetric key generated by the sender. The symmetric key is used only once and is also called a session key. The message and its session key are sent to the receiver. The session key must be sent to the receiver so they know how to decrypt the message, but to protect it during transmission it is encrypted with the receiver's public key. Only the private key belonging to the receiver can decrypt the session key, and use it to symmetrically decrypt the message.
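The following sketch illustrates the hybrid idea with the third-party Python cryptography package, using AES-GCM for the session key and RSA-OAEP to wrap it; it is a conceptual analogue of the scheme described above, not the OpenPGP message format or the algorithm choices of any particular PGP version.

import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

receiver_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
oaep = padding.OAEP(mgf=padding.MGF1(hashes.SHA256()), algorithm=hashes.SHA256(), label=None)

# Sender: encrypt the message with a one-time session key...
session_key = AESGCM.generate_key(bit_length=256)
nonce = os.urandom(12)
ciphertext = AESGCM(session_key).encrypt(nonce, b"the message", None)

# ...then encrypt (wrap) the session key with the receiver's public key.
wrapped_key = receiver_key.public_key().encrypt(session_key, oaep)

# Receiver: recover the session key with the private key, then decrypt the message.
recovered_key = receiver_key.decrypt(wrapped_key, oaep)
plaintext = AESGCM(recovered_key).decrypt(nonce, ciphertext, None)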
Digital signatures
PGP supports message authentication and integrity checking. The latter is used to detect whether a message has been altered since it was completed (the message integrity property) and the former, to determine whether it was actually sent by the person or entity claimed to be the sender (a digital signature). Because the content is encrypted, any changes in the message will fail the decryption with the appropriate key. The sender uses PGP to create a digital signature for the message with either the RSA or DSA algorithms. To do so, PGP computes a hash (also called a message digest) from the plaintext and then creates the digital signature from that hash using the sender's private key.
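A minimal sketch of the sign-then-verify flow, again using the Python cryptography package with RSA and SHA-256 as assumed stand-ins for whichever algorithms a given PGP implementation actually negotiates:

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

sender_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
message = b"text to be signed"

# Sign: the library hashes the message and signs the digest with the private key.
signature = sender_key.sign(message, padding.PKCS1v15(), hashes.SHA256())

# Verify: anyone holding the sender's public key can check the signature;
# verify() raises InvalidSignature if the message or signature was altered.
sender_key.public_key().verify(signature, message, padding.PKCS1v15(), hashes.SHA256())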
Web of trust
Both when encrypting messages and when verifying signatures, it is critical that the public key used to send messages to someone or some entity actually does 'belong' to the intended recipient. Simply downloading a public key from somewhere is not a reliable assurance of that association; deliberate (or accidental) impersonation is possible. From its first version, PGP has always included provisions for distributing user's public keys in an 'identity certification', which is also constructed cryptographically so that any tampering (or accidental garble) is readily detectable. However, merely making a certificate that is impossible to modify without being detected is insufficient; this can prevent corruption only after the certificate has been created, not before. Users must also ensure by some means that the public key in a certificate actually does belong to the person or entity claiming it. A given public key (or more specifically, information binding a user name to a key) may be digitally signed by a third-party user to attest to the association between someone (actually a user name) and the key. There are several levels of confidence that can be included in such signatures. Although many programs read and write this information, few (if any) include this level of certification when calculating whether to trust a key.
The web of trust protocol was first described by Phil Zimmermann in 1992 in the manual for PGP version 2.0.
The web of trust mechanism has advantages over a centrally managed public key infrastructure scheme such as that used by S/MIME but has not been universally used. Users have to be willing to accept certificates and check their validity manually or have to simply accept them. No satisfactory solution has been found for the underlying problem.
Certificates
In the (more recent) OpenPGP specification, trust signatures can be used to support creation of certificate authorities. A trust signature indicates both that the key belongs to its claimed owner and that the owner of the key is trustworthy to sign other keys at one level below their own. A level 0 signature is comparable to a web of trust signature since only the validity of the key is certified. A level 1 signature is similar to the trust one has in a certificate authority because a key signed to level 1 is able to issue an unlimited number of level 0 signatures. A level 2 signature is highly analogous to the trust assumption users must rely on whenever they use the default certificate authority list (like those included in web browsers); it allows the owner of the key to make other keys certificate authorities.
PGP versions have always included a way to cancel ('revoke') public key certificates. A lost or compromised private key will require this if communication security is to be retained by that user. This is, more or less, equivalent to the certificate revocation lists of centralised PKI schemes. Recent PGP versions have also supported certificate expiration dates.
The problem of correctly identifying a public key as belonging to a particular user is not unique to PGP. All public key/private key cryptosystems have the same problem, even if in slightly different guises, and no fully satisfactory solution is known. PGP's original scheme at least leaves the decision as to whether or not to use its endorsement/vetting system to the user, while most other PKI schemes do not, requiring instead that every certificate attested to by a central certificate authority be accepted as correct.
Security quality
To the best of publicly available information, there is no known method which will allow a person or group to break PGP encryption by cryptographic, or computational means. Indeed, in 1995, cryptographer Bruce Schneier characterized an early version as being "the closest you're likely to get to military-grade encryption." Early versions of PGP have been found to have theoretical vulnerabilities and so current versions are recommended. In addition to protecting data in transit over a network, PGP encryption can also be used to protect data in long-term data storage such as disk files. These long-term storage options are also known as data at rest, i.e. data stored, not in transit.
The cryptographic security of PGP encryption depends on the assumption that the algorithms used are unbreakable by direct cryptanalysis with current equipment and techniques.
In the original version, the RSA algorithm was used to encrypt session keys. RSA's security depends upon the one-way function nature of mathematical integer factoring. Similarly, the symmetric key algorithm used in PGP version 2 was IDEA, which might at some point in the future be found to have previously undetected cryptanalytic flaws. Specific instances of current PGP or IDEA insecurities (if they exist) are not publicly known. As current versions of PGP have added additional encryption algorithms, their cryptographic vulnerability varies with the algorithm used. However, none of the algorithms in current use are publicly known to have cryptanalytic weaknesses.
New versions of PGP are released periodically and vulnerabilities fixed by developers as they come to light. Any agency wanting to read PGP messages would probably use easier means than standard cryptanalysis, e.g. rubber-hose cryptanalysis or black-bag cryptanalysis (e.g. installing some form of trojan horse or keystroke logging software/hardware on the target computer to capture encrypted keyrings and their passwords). The FBI has already used this attack against PGP in its investigations. However, any such vulnerabilities apply not just to PGP but to any conventional encryption software.
In 2003, an incident involving seized Psion PDAs belonging to members of the Red Brigades indicated that neither the Italian police nor the FBI were able to decrypt PGP-encrypted files stored on them.
A second incident in December 2006, (see In re Boucher), involving US customs agents who seized a laptop PC that allegedly contained child pornography, indicates that US government agencies find it "nearly impossible" to access PGP-encrypted files. Additionally, a magistrate judge ruling on the case in November 2007 has stated that forcing the suspect to reveal his PGP passphrase would violate his Fifth Amendment rights i.e. a suspect's constitutional right not to incriminate himself. The Fifth Amendment issue was opened again as the government appealed the case, after which a federal district judge ordered the defendant to provide the key.
Evidence suggests that British police investigators are unable to break PGP, and have instead resorted to using RIPA legislation to demand the passwords or keys. In November 2009 a British citizen was convicted under RIPA legislation and jailed for nine months for refusing to provide police investigators with encryption keys to PGP-encrypted files.
PGP as a cryptosystem has been criticized for complexity of the standard, implementation and very low usability of the user interface, including by recognized figures in cryptography research. It uses an ineffective serialization format for storage of both keys and encrypted data, which resulted in signature-spamming attacks on public keys of prominent developers of GNU Privacy Guard. Backwards compatibility of the OpenPGP standard results in usage of relatively weak default choices of cryptographic primitives (CAST5 cipher, CFB mode, S2K password hashing). The standard has also been criticized for leaking metadata, usage of long-term keys and lack of forward secrecy. Popular end-user implementations have suffered from various signature-stripping, cipher downgrade and metadata leakage vulnerabilities which have been attributed to the complexity of the standard.
History
Early history
Phil Zimmermann created the first version of PGP encryption in 1991. The name, "Pretty Good Privacy" was inspired by the name of a grocery store, "Ralph's Pretty Good Grocery", featured in radio host Garrison Keillor's fictional town, Lake Wobegon. This first version included a symmetric-key algorithm that Zimmermann had designed himself, named BassOmatic after a Saturday Night Live sketch. Zimmermann had been a long-time anti-nuclear activist, and created PGP encryption so that similarly inclined people might securely use BBSs and securely store messages and files. No license fee was required for its non-commercial use, and the complete source code was included with all copies.
In a posting of June 5, 2001, entitled "PGP Marks 10th Anniversary", Zimmermann describes the circumstances surrounding his release of PGP:
PGP found its way onto the Internet and rapidly acquired a considerable following around the world. Users and supporters included dissidents in totalitarian countries (some affecting letters to Zimmermann have been published, some of which have been included in testimony before the US Congress), civil libertarians in other parts of the world (see Zimmermann's published testimony in various hearings), and the 'free communications' activists who called themselves cypherpunks (who provided both publicity and distribution); decades later, CryptoParty activists did much the same via Twitter.
Criminal investigation
Shortly after its release, PGP encryption found its way outside the United States, and in February 1993 Zimmermann became the formal target of a criminal investigation by the US Government for "munitions export without a license". At the time, cryptosystems using keys larger than 40 bits were considered munitions within the definition of the US export regulations; PGP has never used keys smaller than 128 bits, so it qualified at that time. Penalties for violation, if found guilty, were substantial. After several years, the investigation of Zimmermann was closed without filing criminal charges against him or anyone else.
Zimmermann challenged these regulations in an imaginative way. He published the entire source code of PGP in a hardback book, via MIT Press, which was distributed and sold widely. Anybody wishing to build their own copy of PGP could cut off the covers, separate the pages, and scan them using an OCR program (or conceivably enter it as a type-in program if OCR software was not available), creating a set of source code text files. One could then build the application using the freely available GNU Compiler Collection. PGP would thus be available anywhere in the world. The claimed principle was simple: export of munitions—guns, bombs, planes, and software—was (and remains) restricted; but the export of books is protected by the First Amendment. The question was never tested in court with respect to PGP. In cases addressing other encryption software, however, two federal appeals courts have established the rule that cryptographic software source code is speech protected by the First Amendment (the Ninth Circuit Court of Appeals in the Bernstein case and the Sixth Circuit Court of Appeals in the Junger case).
US export regulations regarding cryptography remain in force, but were liberalized substantially throughout the late 1990s. Since 2000, compliance with the regulations is also much easier. PGP encryption no longer meets the definition of a non-exportable weapon, and can be exported internationally except to seven specific countries and a list of named groups and individuals (with whom substantially all US trade is prohibited under various US export controls).
PGP 3 and founding of PGP Inc.
During this turmoil, Zimmermann's team worked on a new version of PGP encryption called PGP 3. This new version was to have considerable security improvements, including a new certificate structure that fixed small security flaws in the PGP 2.x certificates as well as permitting a certificate to include separate keys for signing and encryption. Furthermore, the experience with patent and export problems led them to eschew patents entirely. PGP 3 introduced the use of the CAST-128 (a.k.a. CAST5) symmetric key algorithm, and the DSA and ElGamal asymmetric key algorithms, all of which were unencumbered by patents.
After the Federal criminal investigation ended in 1996, Zimmermann and his team started a company to produce new versions of PGP encryption. They merged with Viacrypt (to whom Zimmermann had sold commercial rights and who had licensed RSA directly from RSADSI), which then changed its name to PGP Incorporated. The newly combined Viacrypt/PGP team started work on new versions of PGP encryption based on the PGP 3 system. Unlike PGP 2, which was an exclusively command line program, PGP 3 was designed from the start as a software library allowing users to work from a command line or inside a GUI environment. The original agreement between Viacrypt and the Zimmermann team had been that Viacrypt would have even-numbered versions and Zimmermann odd-numbered versions. Viacrypt, thus, created a new version (based on PGP 2) that they called PGP 4. To remove confusion about how it could be that PGP 3 was the successor to PGP 4, PGP 3 was renamed and released as PGP 5 in May 1997.
Network Associates acquisition
In December 1997, PGP Inc. was acquired by Network Associates, Inc. ("NAI"). Zimmermann and the PGP team became NAI employees. NAI was the first company to have a legal export strategy by publishing source code. Under NAI, the PGP team added disk encryption, desktop firewalls, intrusion detection, and IPsec VPNs to the PGP family. After the export regulation liberalizations of 2000 which no longer required publishing of source, NAI stopped releasing source code.
In early 2001, Zimmermann left NAI. He served as Chief Cryptographer for Hush Communications, who provide an OpenPGP-based e-mail service, Hushmail. He has also worked with Veridis and other companies. In October 2001, NAI announced that its PGP assets were for sale and that it was suspending further development of PGP encryption. The only remaining asset kept was the PGP E-Business Server (the original PGP Commandline version). In February 2002, NAI canceled all support for PGP products, with the exception of the renamed commandline product. NAI (formerly McAfee, then Intel Security, and now McAfee again) continued to sell and support the product under the name McAfee E-Business Server until 2013.
PGP Corporation and Symantec
In August 2002, several ex-PGP team members formed a new company, PGP Corporation, and bought the PGP assets (except for the command line version) from NAI. The new company was funded by Rob Theis of Doll Capital Management (DCM) and Terry Garnett of Venrock Associates. PGP Corporation supported existing PGP users and honored NAI's support contracts. Zimmermann served as a special advisor and consultant to PGP Corporation while continuing to run his own consulting company. In 2003, PGP Corporation created a new server-based product called PGP Universal. In mid-2004, PGP Corporation shipped its own command line version called PGP Command Line, which integrated with the other PGP Encryption Platform applications. In 2005, PGP Corporation made its first acquisition: the German software company Glück & Kanja Technology AG, which became PGP Deutschland AG. In 2010, PGP Corporation acquired Hamburg-based certificate authority TC TrustCenter and its parent company, ChosenSecurity, to form its PGP TrustCenter division.
After the 2002 purchase of NAI's PGP assets, PGP Corporation offered worldwide PGP technical support from its offices in Draper, Utah; Offenbach, Germany; and Tokyo, Japan.
On April 29, 2010, Symantec Corp. announced that it would acquire PGP for $300 million with the intent of integrating it into its Enterprise Security Group. This acquisition was finalized and announced to the public on June 7, 2010. The source code of PGP Desktop 10 is available for peer review.
Also in 2010, Intel Corporation acquired McAfee. In 2013, the McAfee E-Business Server was transferred to Software Diversified Services, which now sells, supports, and develops it under the name SDS E-Business Server.
For the enterprise, Townsend Security currently offers a commercial version of PGP for the IBM i and IBM z mainframe platforms. Townsend Security partnered with Network Associates in 2000 to create a compatible version of PGP for the IBM i platform. Townsend Security again ported PGP in 2008, this time to the IBM z mainframe. This version of PGP relies on a free z/OS encryption facility, which utilizes hardware acceleration. Software Diversified Services also offers a commercial version of PGP (SDS E-Business Server) for the IBM z mainframe.
In May 2018, a bug named EFAIL was discovered in certain implementations of PGP which, from 2003, could reveal the plaintext contents of emails encrypted with it. The chosen mitigation for this vulnerability in PGP Desktop is to mandate the use of SEIP protected packets in the ciphertext, which can cause old emails or other encrypted objects to no longer be decryptable after upgrading to the software version that contains the mitigation.
PGP Corporation encryption applications
This section describes commercial programs available from PGP Corporation. For information on other programs compatible with the OpenPGP specification, see External links below.
While originally used primarily for encrypting the contents of e-mail messages and attachments from a desktop client, PGP products have been diversified since 2002 into a set of encryption applications that can be managed by an optional central policy server. PGP encryption applications include e-mails and attachments, digital signatures, laptop full disk encryption, file and folder security, protection for IM sessions, batch file transfer encryption, and protection for files and folders stored on network servers and, more recently, encrypted or signed HTTP request/responses by means of a client-side (Enigform) and a server-side (mod openpgp) module. There is also a WordPress plugin available, called wp-enigform-authentication, that takes advantage of the session management features of Enigform with mod_openpgp.
The PGP Desktop 9.x family includes PGP Desktop Email, PGP Whole Disk Encryption, and PGP NetShare. Additionally, a number of Desktop bundles are also available. Depending on the application, the products feature desktop e-mail, digital signatures, IM security, whole disk encryption, file, and folder security, encrypted self-extracting archives, and secure shredding of deleted files. Capabilities are licensed in different ways depending on the features required.
The PGP Universal Server 2.x management console handles centralized deployment, security policy, policy enforcement, key management, and reporting. It is used for automated e-mail encryption in the gateway and manages PGP Desktop 9.x clients. In addition to its local keyserver, PGP Universal Server works with the PGP public keyserver—called the PGP Global Directory—to find recipient keys. It has the capability of delivering e-mail securely when no recipient key is found via a secure HTTPS browser session.
With PGP Desktop 9.x managed by PGP Universal Server 2.x, first released in 2005, all PGP encryption applications are based on a new proxy-based architecture. These newer versions of PGP software eliminate the use of e-mail plug-ins and insulate the user from changes to other desktop applications. All desktop and server operations are now based on security policies and operate in an automated fashion. The PGP Universal server automates the creation, management, and expiration of keys, sharing these keys among all PGP encryption applications.
The Symantec PGP platform has now undergone a rename. PGP Desktop is now known as Symantec Encryption Desktop (SED), and the PGP Universal Server is now known as Symantec Encryption Management Server (SEMS). The current shipping versions are Symantec Encryption Desktop 10.3.0 (Windows and macOS platforms) and Symantec Encryption Server 3.3.2.
Also available are PGP Command-Line, which enables command line-based encryption and signing of information for storage, transfer, and backup, as well as the PGP Support Package for BlackBerry which enables RIM BlackBerry devices to enjoy sender-to-recipient messaging encryption.
New versions of PGP applications use both OpenPGP and S/MIME, allowing communication with any user of a NIST-specified standard.
OpenPGP
Within PGP Inc., there was still concern surrounding patent issues. RSADSI was challenging the continuation of the Viacrypt RSA license to the newly merged firm. The company adopted an informal internal standard that they called "Unencumbered PGP" which would "use no algorithm with licensing difficulties". Because of PGP encryption's importance worldwide, many wanted to write their own software that would interoperate with PGP 5. Zimmermann became convinced that an open standard for PGP encryption was critical for them and for the cryptographic community as a whole. In July 1997, PGP Inc. proposed to the IETF that there be a standard called OpenPGP. They gave the IETF permission to use the name OpenPGP to describe this new standard as well as any program that supported the standard. The IETF accepted the proposal and started the OpenPGP Working Group.
OpenPGP is on the Internet Standards Track and is under active development. Many e-mail clients provide OpenPGP-compliant email security as described in RFC 3156. The current specification is RFC 4880 (November 2007), the successor to RFC 2440. RFC 4880 specifies a suite of required algorithms consisting of ElGamal encryption, DSA, Triple DES and SHA-1. In addition to these algorithms, the standard recommends RSA as described in PKCS #1 v1.5 for encryption and signing, as well as AES-128, CAST-128 and IDEA. Beyond these, many other algorithms are supported. The standard was extended to support Camellia cipher by RFC 5581 in 2009, and signing and key exchange based on Elliptic Curve Cryptography (ECC) (i.e. ECDSA and ECDH) by RFC 6637 in 2012. Support for ECC encryption was added by the proposed RFC 4880bis in 2014.
The Free Software Foundation has developed its own OpenPGP-compliant software suite called GNU Privacy Guard, freely available together with all source code under the GNU General Public License and is maintained separately from several graphical user interfaces that interact with the GnuPG library for encryption, decryption, and signing functions (see KGPG, Seahorse, MacGPG). Several other vendors have also developed OpenPGP-compliant software.
The development of an open source OpenPGP-compliant library, OpenPGP.js, written in JavaScript and supported by the Horizon 2020 Framework Programme of the European Union, has allowed web-based applications to use PGP encryption in the web browser.
PGP
PGP Message Exchange Formats (obsolete)
OpenPGP
OpenPGP Message Format (obsolete)
OpenPGP Message Format
The Camellia Cipher in OpenPGP
Elliptic Curve Cryptography (ECC) in OpenPGP
draft-ietf-openpgp-crypto-refresh OpenPGP Message Format
PGP/MIME
MIME Security with Pretty Good Privacy (PGP)
MIME Security with OpenPGP
OpenPGP's encryption can ensure the secure delivery of files and messages, as well as provide verification of who created or sent the message using a process called digital signing. The open source office suite LibreOffice implemented document signing with OpenPGP as of version 5.4.0 on Linux. Using OpenPGP for communication requires participation by both the sender and recipient. OpenPGP can also be used to secure sensitive files when they're stored in vulnerable places like mobile devices or in the cloud.
Limitations
With the advancement of cryptography, parts of PGP have been criticized for being dated:
The long length of PGP public keys
Difficulty for the users to comprehend and poor usability
Lack of ubiquity
Lack of forward secrecy
In October 2017, the ROCA vulnerability was announced, which affects RSA keys generated by buggy Infineon firmware used on Yubikey 4 tokens, often used with PGP. Many published PGP keys were found to be susceptible. Yubico offers free replacement of affected tokens.
See also
Bernstein v. United States
Electronic envelope
Email encryption
Email privacy
GNU Privacy Guard
Gpg4win
Key server (cryptographic)
PGP word list
PGPDisk
Pretty Easy privacy
Privacy
Public-key cryptography
S/MIME
X.509
ZRTP
References
Further reading
External links
OpenPGP::SDK
MIT Public Key Directory for Registration and Search
List of public keyservers
IETF OpenPGP working group
OpenPGP Alliance
1991 software
Cryptographic software
History of cryptography
Internet privacy software
OpenPGP
Privacy software
Encryption debate |
23486 | https://en.wikipedia.org/wiki/Phil%20Zimmermann | Phil Zimmermann | Philip R. Zimmermann (born 1954) is an American computer scientist and cryptographer. He is the creator of Pretty Good Privacy (PGP), the most widely used email encryption software in the world. He is also known for his work in VoIP encryption protocols, notably ZRTP and Zfone. Zimmermann is co-founder and Chief Scientist of the global encrypted communications firm Silent Circle.
Background
He was born in Camden, New Jersey. Zimmermann received a B.S. degree in computer science from Florida Atlantic University in Boca Raton, Florida in 1978. In the 1980s, Zimmermann worked in Boulder, Colorado as a software engineer on the Nuclear Weapons Freeze Campaign as a military policy analyst.
PGP
In 1991, he wrote the popular Pretty Good Privacy (PGP) program, and made it available (together with its source code) through public FTP for download, the first widely available program implementing public-key cryptography. Shortly thereafter, it became available overseas via the Internet, though Zimmermann has said he had no part in its distribution outside the United States.
The very first version of PGP included an encryption algorithm, BassOmatic, developed by Zimmermann.
Arms Export Control Act investigation
After a report from RSA Security, who were in a licensing dispute with regard to the use of the RSA algorithm in PGP, the United States Customs Service started a criminal investigation of Zimmermann, for allegedly violating the Arms Export Control Act. The United States Government had long regarded cryptographic software as a munition, and thus subject to arms trafficking export controls. At that time, PGP was considered to be impermissible ("high-strength") for export from the United States. The maximum strength allowed for legal export has since been raised and now allows PGP to be exported. The investigation lasted three years, but was finally dropped without filing charges after MIT Press published the source code of PGP.
In 1995, Zimmermann published the book PGP Source Code and Internals as a way to bypass limitations on exporting digital code. Zimmermann's introduction says the book contains "all of the C source code to a software package called PGP" and that the unusual publication in book form of the complete source code for a computer program was a direct response to the U.S. government's criminal investigation of Zimmermann for violations of U.S. export restrictions as a result of the international spread of PGP's use.
After the government dropped its case without indictment in early 1996, Zimmermann founded PGP Inc. and released an updated version of PGP and some additional related products. That company was acquired by Network Associates (NAI) in December 1997, and Zimmermann stayed on for three years as a Senior Fellow. NAI decided to drop the product line and in 2002, PGP was acquired from NAI by a new company called PGP Corporation. Zimmermann served as a special advisor and consultant to that firm until Symantec acquired PGP Corporation in 2010. Zimmermann is also a fellow at the Stanford Law School's Center for Internet and Society. He was a principal designer of the cryptographic key agreement protocol (the "association model") for the Wireless USB standard.
Silent Circle
Along with Mike Janke and Jon Callas, in 2012 he co-founded Silent Circle, a secure hardware and subscription based software security company.
Dark Mail Alliance
In October 2013, Zimmermann, along with other key employees from Silent Circle, teamed up with Lavabit founder Ladar Levison to create the Dark Mail Alliance. The goal of the organization is to work on a new protocol to replace PGP that will encrypt email metadata, among other things that PGP is not capable of.
Okuna
Zimmermann is also involved in the social network Okuna, formerly Openbook, which aims to be an ethical and privacy-friendly alternative to existing social networks, especially Facebook. He sees today's established social media platforms as a threat to democracy and privacy, because of their profit-oriented revenue models that "are all about exploiting our personal information" and "[deepen] the political divides in our culture", and Okuna as the solution to these problems.
Zimmermann's Law
In 2013, an article on "Zimmermann's Law" quoted Phil Zimmermann as saying "The natural flow of technology tends to move in the direction of making surveillance easier", and "the ability of computers to track us doubles every eighteen months", in reference to Moore's law.
Awards and other recognition
Zimmermann has received numerous technical and humanitarian awards for his pioneering work in cryptography:
In 2018, Zimmermann was inducted into Information Systems Security Association (ISSA) hall of fame by the ISSA International Organization on October 16, 2018.
In 2012, Zimmermann was inducted into the Internet Hall of Fame by the Internet Society.
In 2008, PC World named Zimmermann one of the "Top 50 Tech Visionaries" of the last 50 years.
In 2006, eWeek ranked PGP 9th in the 25 Most Influential and Innovative Products introduced since the invention of the PC in 1981.
In 2003, Reason named him a "Hero of Freedom".
In 2001, Zimmermann was inducted into the CRN Industry Hall of Fame.
In 2000, InfoWorld named him one of the "Top 10 Innovators in E-business".
In 1999, he received the Louis Brandeis Award from Privacy International.
In 1998, he received a Lifetime Achievement Award from Secure Computing Magazine.
In 1996, he received the Norbert Wiener Award for Social and Professional Responsibility for promoting the responsible use of technology.
In 1996, he received the Thomas S. Szasz Award for Outstanding Contributions to the Cause of Civil Liberties from the Center for Independent Thought.
In 1995, he received the Chrysler Design Award for Innovation, and the Pioneer Award from the Electronic Frontier Foundation.
In 1995, Newsweek also named Zimmermann one of the "Net 50", the 50 most influential people on the Internet.
Simon Singh's The Code Book devotes an entire chapter to Zimmermann and PGP.
Publications
The Official PGP User's Guide, MIT Press, 1995
PGP Source Code and Internals, MIT Press, 1995
See also
Data privacy
GNU Privacy Guard
Information privacy
Information security
PGPfone
PGP word list
References
External links
Why I wrote PGP
Conversation With Phil Zimmermann, Mikael Pawlo, GrepLaw, June 6, 2003.
E-mail security hero takes on VoIP, Declan McCullagh, C|net, 15 August 2006.
VON Pioneers: Philip Zimmermann Encrypts VoIP, VON Magazine, Jan 2007.
Silent Circle – Global Encrypted Communications Service
1954 births
Living people
American people of German descent
Cypherpunks
Modern cryptographers
American cryptographers
Public-key cryptographers
People from Camden, New Jersey
Florida Atlantic University alumni
Privacy activists
American human rights activists
American technology company founders |
23511 | https://en.wikipedia.org/wiki/Point-to-Point%20Protocol | Point-to-Point Protocol | In computer networking, Point-to-Point Protocol (PPP) is a data link layer (layer 2) communication protocol between two routers directly without any host or any other networking in between. It can provide connection authentication, transmission encryption, and data compression.
PPP is used over many types of physical networks, including serial cable, phone line, trunk line, cellular telephone, specialized radio links, ISDN, and fiber optic links such as SONET. Since IP packets cannot be transmitted over a modem line on their own without some data link protocol that can identify where the transmitted frame starts and where it ends, Internet service providers (ISPs) have used PPP for customer dial-up access to the Internet.
Two derivatives of PPP, Point-to-Point Protocol over Ethernet (PPPoE) and Point-to-Point Protocol over ATM (PPPoA), are used most commonly by ISPs to establish a digital subscriber line (DSL) Internet service connection with customers.
Description
PPP is commonly used as a data link layer protocol for connection over synchronous and asynchronous circuits, where it has largely superseded the older Serial Line Internet Protocol (SLIP) and telephone company mandated standards (such as Link Access Protocol, Balanced (LAPB) in the X.25 protocol suite). The only requirement for PPP is that the circuit provided be duplex. PPP was designed to work with numerous network layer protocols, including Internet Protocol (IP), TRILL, Novell's Internetwork Packet Exchange (IPX), NBF, DECnet and AppleTalk. Like SLIP, it provides a full Internet connection over telephone lines via modem. It is more reliable than SLIP because each frame carries a frame check sequence, allowing the receiver to detect and discard frames damaged in transit.
PPP was designed somewhat after the original HDLC specifications. The designers of PPP included many additional features that had been seen only in proprietary data-link protocols up to that time. PPP is specified in RFC 1661.
RFC 2516 describes Point-to-Point Protocol over Ethernet (PPPoE) as a method for transmitting PPP over Ethernet that is sometimes used with DSL. RFC 2364 describes Point-to-Point Protocol over ATM (PPPoA) as a method for transmitting PPP over ATM Adaptation Layer 5 (AAL5), which is also a common alternative to PPPoE used with DSL.
PPP, PPPoE and PPPoA are widely used in WAN lines.
PPP is a layered protocol that has three components:
An encapsulation component that is used to transmit datagrams over the specified physical layer.
A Link Control Protocol (LCP) to establish, configure, and test the link as well as negotiate settings, options and the use of features.
One or more Network Control Protocols (NCP) used to negotiate optional configuration parameters and facilities for the network layer. There is one NCP for each higher-layer protocol supported by PPP.
Automatic self configuration
LCP initiates and terminates connections gracefully, allowing hosts to negotiate connection options. It is an integral part of PPP, and is defined in the same standard specification. LCP provides automatic configuration of the interfaces at each end (such as setting datagram size, escaped characters, and magic numbers) and for selecting optional authentication. The LCP protocol runs on top of PPP (with PPP protocol number 0xC021) and therefore a basic PPP connection has to be established before LCP is able to configure it.
RFC 1994 describes Challenge-Handshake Authentication Protocol (CHAP), which is preferred for establishing dial-up connections with ISPs.
Although deprecated, Password Authentication Protocol (PAP) is still sometimes used.
Another option for authentication over PPP is Extensible Authentication Protocol (EAP) described in RFC 2284.
After the link has been established, additional network (layer 3) configuration may take place. Most commonly, the Internet Protocol Control Protocol (IPCP) is used, although Internetwork Packet Exchange Control Protocol (IPXCP) and AppleTalk Control Protocol (ATCP) were once popular. Internet Protocol Version 6 Control Protocol (IPv6CP) will see extended use in the future, when IPv6 replaces IPv4 as the dominant layer-3 protocol.
Multiple network layer protocols
PPP permits multiple network layer protocols to operate on the same communication link. For every network layer protocol used, a separate Network Control Protocol (NCP) is provided in order to encapsulate and negotiate options for the multiple network layer protocols. It negotiates network-layer information, e.g. network address or compression options, after the connection has been established.
For example, IP uses IPCP, and Internetwork Packet Exchange (IPX) uses the Novell IPX Control Protocol (IPX/SPX). NCPs include fields containing standardized codes to indicate the network layer protocol type that the PPP connection encapsulates.
The following NCPs may be used with PPP:
IPCP for IP, protocol code number 0x8021, RFC 1332
the OSI Network Layer Control Protocol (OSINLCP) for the various OSI network layer protocols, protocol code number 0x8023, RFC 1377
the AppleTalk Control Protocol (ATCP) for AppleTalk, protocol code number 0x8029, RFC 1378
the Internetwork Packet Exchange Control Protocol (IPXCP) for the Internet Packet Exchange, protocol code number 0x802B, RFC 1552
the DECnet Phase IV Control Protocol (DNCP) for DNA Phase IV Routing protocol (DECnet Phase IV), protocol code number 0x8027, RFC 1762
the NetBIOS Frames Control Protocol (NBFCP) for the NetBIOS Frames protocol (or NetBEUI as it was called before that), protocol code number 0x803F, RFC 2097
the IPv6 Control Protocol (IPV6CP) for IPv6, protocol code number 0x8057, RFC 5072
Looped link detection
PPP detects looped links using a feature involving magic numbers. When the node sends PPP LCP messages, these messages may include a magic number. If a line is looped, the node receives an LCP message with its own magic number, instead of getting a message with the peer's magic number.
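A minimal sketch of the idea, using a hypothetical in-memory representation rather than the real LCP encoding defined in RFC 1661:

import os

local_magic = int.from_bytes(os.urandom(4), "big")   # magic numbers are chosen randomly

def looks_looped(received_magic: int) -> bool:
    # Receiving our own magic number back strongly suggests the line is looped.
    return received_magic == local_magic

# Simulate a looped line: the "peer's" LCP message echoes our own magic number.
print(looks_looped(local_magic))        # True  -> looped-back link
print(looks_looped(local_magic ^ 1))    # False -> a genuine peer with its own number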
Configuration options
The previous section introduced the use of LCP options to meet specific WAN connection requirements. PPP may include the following LCP options:
Authentication - Peer routers exchange authentication messages. Two authentication choices are Password Authentication Protocol (PAP) and Challenge Handshake Authentication Protocol (CHAP). Authentication is explained in the next section.
Compression - Increases the effective throughput on PPP connections by reducing the amount of data in the frame that must travel across the link. The protocol decompresses the frame at its destination. See RFC 1962 for more details.
Error detection - Identifies fault conditions. The Quality and Magic Number options help ensure a reliable, loop-free data link. The Magic Number field helps in detecting links that are in a looped-back condition. Until the Magic-Number Configuration Option has been successfully negotiated, the Magic-Number must be transmitted as zero. Magic numbers are generated randomly at each end of the connection.
Multilink - Provides load balancing over several interfaces used by PPP through Multilink PPP (see below).
PPP frame
Structure
PPP frames are variants of HDLC frames. The basic PPP frame consists of a Protocol field of one or two octets, a variable-length Information field, and optional Padding.
If both peers agree to Address field and Control field compression during LCP, then those fields are omitted. Likewise if both peers agree to Protocol field compression, then the 0x00 byte can be omitted.
The Protocol field indicates the type of payload packet: 0xC021 for LCP, 0x80xy for various NCPs, 0x0021 for IP, 0x0029 AppleTalk, 0x002B for IPX, 0x003D for Multilink, 0x003F for NetBIOS, 0x00FD for MPPC and MPPE, etc. PPP is limited, and cannot contain general Layer 3 data, unlike EtherType.
The Information field contains the PPP payload; it has a variable length with a negotiated maximum called the Maximum Transmission Unit. By default, the maximum is 1500 octets. It might be padded on transmission; if the information for a particular protocol can be padded, that protocol must allow information to be distinguished from padding.
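A short sketch of decoding the Protocol field of a payload, using the values listed above and ignoring HDLC framing and protocol-field compression; the sample bytes form a minimal LCP Configure-Request.

PPP_PROTOCOLS = {
    0xC021: "LCP",
    0x8021: "IPCP",
    0x0021: "IPv4",
    0x0029: "AppleTalk",
    0x002B: "IPX",
    0x003D: "Multilink",
    0x003F: "NetBIOS Frames",
    0x00FD: "MPPC/MPPE",
}

def split_ppp_payload(payload: bytes) -> tuple[str, bytes]:
    # Read an uncompressed two-octet Protocol field, then return the Information field.
    proto = int.from_bytes(payload[:2], "big")
    return PPP_PROTOCOLS.get(proto, hex(proto)), payload[2:]

name, info = split_ppp_payload(bytes([0xC0, 0x21, 0x01, 0x01, 0x00, 0x04]))
print(name, info.hex())   # LCP 01010004 (Configure-Request, identifier 1, no options)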
Encapsulation
PPP frames are encapsulated in a lower-layer protocol that provides framing and may provide other functions such as a checksum to detect transmission errors. PPP on serial links is usually encapsulated in a framing similar to HDLC, described by IETF RFC 1662.
The Flag field is present when PPP with HDLC-like framing is used.
The Address and Control fields always have the value hex FF (for "all stations") and hex 03 (for "unnumbered information"), and can be omitted whenever PPP LCP Address-and-Control-Field-Compression (ACFC) is negotiated.
The frame check sequence (FCS) field is used for determining whether an individual frame has an error. It contains a checksum computed over the frame to provide basic protection against errors in transmission. This is a CRC code similar to the one used for other layer two protocol error protection schemes such as the one used in Ethernet. According to RFC 1662, it can be either 16 bits (2 bytes) or 32 bits (4 bytes) in size; the default is 16 bits, using the polynomial x^16 + x^12 + x^5 + 1.
The FCS is calculated over the Address, Control, Protocol, Information and Padding fields after the message has been encapsulated.
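The 16-bit FCS can be computed with the following bit-by-bit algorithm, equivalent to the table-driven C code given in RFC 1662; a sketch in Python:

def fcs16(data: bytes, fcs: int = 0xFFFF) -> int:
    # Running FCS-16 register: reflected CRC-16 with polynomial x^16 + x^12 + x^5 + 1.
    for byte in data:
        fcs ^= byte
        for _ in range(8):
            fcs = (fcs >> 1) ^ 0x8408 if fcs & 1 else fcs >> 1
    return fcs

def append_fcs(frame: bytes) -> bytes:
    fcs = fcs16(frame) ^ 0xFFFF                     # one's complement before sending
    return frame + bytes([fcs & 0xFF, fcs >> 8])    # least significant octet first

def fcs_is_good(frame_with_fcs: bytes) -> bool:
    return fcs16(frame_with_fcs) == 0xF0B8          # "good FCS" constant from RFC 1662

print(fcs_is_good(append_fcs(b"\xff\x03\xc0\x21\x01\x01\x00\x04")))   # True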
Line activation and phases
Link Dead This phase occurs when the link fails, or one side has been told to disconnect (e.g. a user has finished his or her dialup connection.)
Link Establishment Phase This phase is where Link Control Protocol negotiation is attempted. If successful, control goes either to the authentication phase or the Network-Layer Protocol phase, depending on whether authentication is desired.
Authentication Phase This phase is optional. It allows the sides to authenticate each other before a connection is established. If successful, control goes to the network-layer protocol phase.
Network-Layer Protocol Phase This phase is where the Network Control Protocol for each desired network-layer protocol is invoked. For example, IPCP is used in establishing IP service over the line. Data transport for all protocols which are successfully started with their network control protocols also occurs in this phase. Closing down of network protocols also occurs in this phase.
Link Termination Phase This phase closes down this connection. This can happen if there is an authentication failure, if there are so many checksum errors that the two parties decide to tear down the link automatically, if the link suddenly fails, or if the user decides to hang up a connection.
Over several links
Multilink PPP
Multilink PPP (also referred to as MLPPP, MP, MPPP, MLP, or Multilink) provides a method for spreading traffic across multiple distinct PPP connections. It is defined in RFC 1990. It can be used, for example, to connect a home computer to an Internet Service Provider using two traditional 56k modems, or to connect a company through two leased lines.
On a single PPP line frames cannot arrive out of order, but this is possible when the frames are divided among multiple PPP connections. Therefore, Multilink PPP must number the fragments so they can be put in the right order again when they arrive.
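A minimal sketch of reassembly by sequence number; the fragment representation here is hypothetical, whereas RFC 1990 defines the real header with beginning/ending bits and the sequence number.

from dataclasses import dataclass

@dataclass
class Fragment:
    seq: int        # Multilink sequence number
    begin: bool     # (B)eginning-fragment bit
    end: bool       # (E)nding-fragment bit
    data: bytes

def reassemble(fragments: list) -> bytes:
    ordered = sorted(fragments, key=lambda f: f.seq)   # restore original order
    assert ordered[0].begin and ordered[-1].end        # crude completeness check
    return b"".join(f.data for f in ordered)

frags = [Fragment(2, False, True, b"world"),
         Fragment(1, True, False, b"hello ")]
print(reassemble(frags))   # b'hello world'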
Multilink PPP is an example of a link aggregation technology. Cisco IOS Release 11.1 and later supports Multilink PPP.
Multiclass PPP
With PPP, one cannot establish several simultaneous distinct PPP connections over a single link.
That is not possible with Multilink PPP either. Multilink PPP uses contiguous sequence numbers for all the fragments of a packet, and as a consequence it is not possible to suspend the sending of one packet's sequence of fragments in order to send another packet. This prevents Multilink PPP from being run multiple times over the same links.
Multiclass PPP is a kind of Multilink PPP where each "class" of traffic uses a separate sequence number space and reassembly buffer. Multiclass PPP is defined in RFC 2686
Tunnels
Derived protocols
PPTP (Point-to-Point Tunneling Protocol) is a form of PPP between two hosts via GRE using encryption (MPPE) and compression (MPPC).
As a layer 2 protocol between both ends of a tunnel
Many protocols can be used to tunnel data over IP networks. Some of them, like SSL, SSH, or L2TP create virtual network interfaces and give the impression of direct physical connections between the tunnel endpoints. On a Linux host for example, these interfaces would be called tun0 or ppp0.
As there are only two endpoints on a tunnel, the tunnel is a point-to-point connection and PPP is a natural choice as a data link layer protocol between the virtual network interfaces. PPP can assign IP addresses to these virtual interfaces, and these IP addresses can be used, for example, to route between the networks on both sides of the tunnel.
IPsec in tunneling mode does not create virtual physical interfaces at the end of the tunnel, since the tunnel is handled directly by the TCP/IP stack. L2TP can be used to provide these interfaces, this technique is called L2TP/IPsec. In this case too, PPP provides IP addresses to the extremities of the tunnel.
IETF standards
PPP is defined in RFC 1661 (The Point-to-Point Protocol, July 1994). RFC 1547 (Requirements for an Internet Standard Point-to-Point Protocol, December 1993) provides historical information about the need for PPP and its development. A series of related RFCs have been written to define how a variety of network control protocols (including TCP/IP, DECnet, AppleTalk, IPX, and others) work with PPP.
, The PPP Internet Protocol Control Protocol (IPCP)
, Standard 51, The Point-to-Point Protocol (PPP)
, Standard 51, PPP in HDLC-like Framing
, PPP Compression Control Protocol (CCP)
, PPP Serial Data transport Protocol
, PPP Internet Protocol Control Protocol Extensions for Name Server Addresses
, The PPP Multilink Protocol (MP)
, PPP Challenge Handshake Authentication Protocol (CHAP)
, Informational, PPP Vendor Extensions
, PPP Extensible Authentication Protocol (EAP)
, PPP over ATM
, PPP over Ethernet
, PPP over SONET/SDH
, The Multi-Class Extension to Multi-Link PPP
, Proposed Standard, PPP in a Real-time Oriented HDLC-like Framing
, IP Version 6 over PPP
, Negotiation for IPv6 Datagram Compression Using IPv6 Control Protocol
, PPP Transparent Interconnection of Lots of Links (TRILL) Protocol Control Protocol
Additional drafts:
PPP Internet Protocol Control Protocol Extensions for IP Subnet (draft)
PPP IPV6 Control Protocol Extensions for DNS Server Addresses (draft)
PPP Internet Protocol Control Protocol Extensions for Route Table Entries (draft)
PPP Consistent Overhead Byte Stuffing (draft) (cf. Consistent Overhead Byte Stuffing)
See also
Diameter
Extensible Authentication Protocol
Hayes command set
Link Access Procedure for Modems (LAPM)
Multiprotocol Encapsulation (MPE) for MPEG transport stream
Point-to-Point Protocol daemon (PPPD)
PPPoX
RADIUS
Unidirectional Lightweight Encapsulation (ULE) for MPEG transport stream
References
Internet Standards
Link protocols
Logical link control
Modems
Wide area networks |