Datasets: de-francophones — commit 3554813 (parent a81820d)
4cb5d2acc9710df56ee8901c7607c8852da197796be55efab85905102140e9bf
Files changed: en/469.html.txt (+137) and en/4690.html.txt through en/4733.html.txt (49 further files added).
en/469.html.txt
ADDED
@@ -0,0 +1,137 @@
Autism is a developmental disorder characterized by difficulties with social interaction and communication, and by restricted and repetitive behavior.[6] Parents often notice signs during the first three years of their child's life.[1][6] These signs often develop gradually, though some children with autism experience worsening in their communication and social skills after reaching developmental milestones at a normal pace.[17]
Autism is associated with a combination of genetic and environmental factors.[7] Risk factors during pregnancy include certain infections, such as rubella; toxins, including valproic acid, alcohol, cocaine, pesticides, lead, and air pollution; fetal growth restriction; and autoimmune diseases.[18][19][20] Controversies surround other proposed environmental causes; for example, the vaccine hypothesis, which has been disproven.[21] Autism affects information processing in the brain and how nerve cells and their synapses connect and organize; how this occurs is not well understood.[22] The Diagnostic and Statistical Manual of Mental Disorders (DSM-5) combines autism and less severe forms of the condition, including Asperger syndrome and pervasive developmental disorder not otherwise specified (PDD-NOS), into the diagnosis of autism spectrum disorder (ASD).[6][23]
Early behavioral interventions or speech therapy can help children with autism gain self-care, social, and communication skills.[9][10] Although there is no known cure,[9] there have been cases of children who recovered.[24] Some autistic adults are unable to live independently.[15] An autistic culture has developed, with some individuals seeking a cure and others believing autism should be accepted as a difference to be accommodated instead of cured.[25][26]
Globally, autism is estimated to affect 24.8 million people as of 2015.[16] In the 2000s, the number of people affected was estimated at 1–2 per 1,000 people worldwide.[27] In developed countries, about 1.5% of children are diagnosed with ASD as of 2017,[28] up from 0.7% in 2000 in the United States.[29] It occurs four to five times more often in males than in females.[29] The number of people diagnosed has increased dramatically since the 1960s, which may be partly due to changes in diagnostic practice.[27] The question of whether actual rates have increased is unresolved.[27]
Autism is a highly variable neurodevelopmental disorder[30] whose symptoms first appear during infancy or childhood and which generally follows a steady course without remission.[31] People with autism may be severely impaired in some respects but average, or even superior, in others.[32] Overt symptoms gradually begin after the age of six months, become established by age two or three years,[33] and tend to continue through adulthood, although often in more muted form.[34] It is distinguished by a characteristic triad of symptoms: impairments in social interaction, impairments in communication, and repetitive behavior. Other aspects, such as atypical eating, are also common but are not essential for diagnosis.[35] Individual symptoms of autism occur in the general population and appear not to be strongly associated with one another, with no sharp line separating pathologically severe traits from common ones.[36]
Social deficits distinguish autism and the related autism spectrum disorders (ASD; see Classification) from other developmental disorders.[34] People with autism have social impairments and often lack the intuition about others that many people take for granted. Noted autistic Temple Grandin described her inability to understand the social communication of neurotypicals, or people with typical neural development, as leaving her feeling "like an anthropologist on Mars".[37]
Unusual social development becomes apparent early in childhood. Autistic infants show less attention to social stimuli, smile and look at others less often, and respond less to their own name. Autistic toddlers differ more strikingly from social norms; for example, they have less eye contact and turn-taking, and do not have the ability to use simple movements to express themselves, such as pointing at things.[38] Three- to five-year-old children with autism are less likely to exhibit social understanding, approach others spontaneously, imitate and respond to emotions, communicate nonverbally, and take turns with others. However, they do form attachments to their primary caregivers.[39] Most children with autism display moderately less attachment security than neurotypical children, although this difference disappears in children with higher mental development or less pronounced autistic traits.[40] Older children and adults with ASD perform worse on tests of face and emotion recognition[41] although this may be partly due to a lower ability to define a person's own emotions.[42]
Children with high-functioning autism have more intense and frequent loneliness compared to non-autistic peers, despite the common belief that children with autism prefer to be alone. Making and maintaining friendships often proves to be difficult for those with autism. For them, the quality of friendships, not the number of friends, predicts how lonely they feel. Functional friendships, such as those resulting in invitations to parties, may affect the quality of life more deeply.[43]
There are many anecdotal reports, but few systematic studies, of aggression and violence in individuals with ASD. The limited data suggest that, in children with intellectual disability, autism is associated with aggression, destruction of property, and meltdowns.[44]
About a third to a half of individuals with autism do not develop enough natural speech to meet their daily communication needs.[45] Differences in communication may be present from the first year of life, and may include delayed onset of babbling, unusual gestures, diminished responsiveness, and vocal patterns that are not synchronized with the caregiver. In the second and third years, children with autism have less frequent and less diverse babbling, consonants, words, and word combinations; their gestures are less often integrated with words. Children with autism are less likely to make requests or share experiences, and are more likely to simply repeat others' words (echolalia)[46][47] or reverse pronouns.[48] Joint attention seems to be necessary for functional speech, and deficits in joint attention seem to distinguish infants with ASD.[23] For example, they may look at a pointing hand instead of the pointed-at object,[38][47] and they consistently fail to point at objects in order to comment on or share an experience.[23] Children with autism may have difficulty with imaginative play and with developing symbols into language.[46][47]
In a pair of studies, high-functioning children with autism aged 8–15 performed equally well as, and as adults better than, individually matched controls at basic language tasks involving vocabulary and spelling. Both autistic groups performed worse than controls at complex language tasks such as figurative language, comprehension and inference. As people are often sized up initially from their basic language skills, these studies suggest that people speaking to autistic individuals are more likely to overestimate what their audience comprehends.[49]
Autistic individuals can display many forms of repetitive or restricted behavior, which the Repetitive Behavior Scale-Revised (RBS-R) categorizes into stereotyped, self-injurious, compulsive, ritualistic, sameness, and restricted behaviors.[50]
No single repetitive or self-injurious behavior seems to be specific to autism, but autism appears to have an elevated pattern of occurrence and severity of these behaviors.[51]
Autistic individuals may have symptoms that are independent of the diagnosis, but that can affect the individual or the family.[35]
An estimated 0.5% to 10% of individuals with ASD show unusual abilities, ranging from splinter skills such as the memorization of trivia to the extraordinarily rare talents of prodigious autistic savants.[52] Many individuals with ASD show superior skills in perception and attention, relative to the general population.[53] Sensory abnormalities are found in over 90% of those with autism, and are considered core features by some,[54] although there is no good evidence that sensory symptoms differentiate autism from other developmental disorders.[55] Differences are greater for under-responsivity (for example, walking into things) than for over-responsivity (for example, distress from loud noises) or for sensation seeking (for example, rhythmic movements).[56] An estimated 60–80% of autistic people have motor signs that include poor muscle tone, poor motor planning, and toe walking;[54] deficits in motor coordination are pervasive across ASD and are greater in autism proper.[57] Unusual eating behavior occurs in about three-quarters of children with ASD, to the extent that it was formerly a diagnostic indicator. Selectivity is the most common problem, although eating rituals and food refusal also occur.[58]
There is tentative evidence that autism occurs more frequently in people with gender dysphoria.[59][60]
Gastrointestinal problems are one of the most commonly associated medical disorders in people with autism.[61] These are linked to greater social impairment, irritability, behavior and sleep problems, language impairments and mood changes.[61][62]
Parents of children with ASD have higher levels of stress.[38] Siblings of children with ASD report greater admiration of, and less conflict with, the affected sibling than siblings of unaffected children, and are similar to siblings of children with Down syndrome in these aspects of the sibling relationship. However, they report lower levels of closeness and intimacy than siblings of children with Down syndrome; siblings of individuals with ASD have greater risk of negative well-being and poorer sibling relationships as adults.[63]
It has long been presumed that there is a common cause at the genetic, cognitive, and neural levels for autism's characteristic triad of symptoms.[64] However, there is increasing suspicion that autism is instead a complex disorder whose core aspects have distinct causes that often co-occur.[64][65]
Autism has a strong genetic basis, although the genetics of autism are complex and it is unclear whether ASD is explained more by rare mutations with major effects, or by rare multigene interactions of common genetic variants.[67][68] Complexity arises due to interactions among multiple genes, the environment, and epigenetic factors, which do not change the DNA sequence but are heritable and influence gene expression.[34] Many genes have been associated with autism through sequencing the genomes of affected individuals and their parents.[69] Studies of twins suggest that heritability is 0.7 for autism and as high as 0.9 for ASD, and siblings of those with autism are about 25 times more likely to be autistic than the general population.[54] However, most of the mutations that increase autism risk have not been identified. Typically, autism cannot be traced to a Mendelian (single-gene) mutation or to a single chromosome abnormality, and none of the genetic syndromes associated with ASDs have been shown to selectively cause ASD.[67] Numerous candidate genes have been located, with only small effects attributable to any particular gene.[67] Most loci individually explain less than 1% of cases of autism.[70] The large number of autistic individuals with unaffected family members may result from spontaneous structural variation, such as deletions, duplications, or inversions in genetic material during meiosis.[71][72] Hence, a substantial fraction of autism cases may be traceable to genetic causes that are highly heritable but not inherited: that is, the mutation that causes the autism is not present in the parental genome.[66] Autism may be underdiagnosed in women and girls due to an assumption that it is primarily a male condition,[73] but genetic phenomena such as imprinting and X linkage can raise the frequency and severity of conditions in males, and theories such as the imprinted brain theory and the extreme male brain theory have been put forward as genetic explanations for why males are diagnosed more often.[74][75][76]
Maternal nutrition and inflammation during preconception and pregnancy influence fetal neurodevelopment. Intrauterine growth restriction is associated with ASD in both term and preterm infants.[19] Maternal inflammatory and autoimmune diseases may damage fetal tissues, aggravating a genetic problem or damaging the nervous system.[20]
Exposure to air pollution during pregnancy, especially heavy metals and particulates, may increase the risk of autism.[77][78] Environmental factors that have been claimed without evidence to contribute to or exacerbate autism include certain foods, infectious diseases, solvents, PCBs, phthalates and phenols used in plastic products, pesticides, brominated flame retardants, alcohol, smoking, illicit drugs, vaccines,[27] and prenatal stress. Some, such as the MMR vaccine, have been completely disproven.[79][80][81][82]
Parents may first become aware of autistic symptoms in their child around the time of a routine vaccination. This has led to unsupported theories blaming vaccine "overload", a vaccine preservative, or the MMR vaccine for causing autism.[83] The latter theory was supported by a litigation-funded study that has since been shown to have been "an elaborate fraud".[84] Although these theories lack convincing scientific evidence and are biologically implausible,[83] parental concern about a potential vaccine link with autism has led to lower rates of childhood immunizations, outbreaks of previously controlled childhood diseases in some countries, and the preventable deaths of several children.[85][86]
Autism's symptoms result from maturation-related changes in various systems of the brain. How autism occurs is not well understood. Its mechanism can be divided into two areas: the pathophysiology of brain structures and processes associated with autism, and the neuropsychological linkages between brain structures and behaviors.[87] The behaviors appear to have multiple pathophysiologies.[36]
There is evidence that gut–brain axis abnormalities may be involved.[61][62][88] A 2015 review proposed that immune dysregulation, gastrointestinal inflammation, malfunction of the autonomic nervous system, gut flora alterations, and food metabolites may cause brain neuroinflammation and dysfunction.[62] A 2016 review concluded that enteric nervous system abnormalities might play a role in neurological disorders such as autism; neural connections and the immune system form a pathway that may allow diseases originating in the intestine to spread to the brain.[88]
Several lines of evidence point to synaptic dysfunction as a cause of autism.[22] Some rare mutations may lead to autism by disrupting some synaptic pathways, such as those involved with cell adhesion.[89] Gene replacement studies in mice suggest that autistic symptoms are closely related to later developmental steps that depend on activity in synapses and on activity-dependent changes.[90] All known teratogens (agents that cause birth defects) related to the risk of autism appear to act during the first eight weeks from conception, and though this does not exclude the possibility that autism can be initiated or affected later, there is strong evidence that autism arises very early in development.[91]
Diagnosis is based on behavior, not cause or mechanism.[36][92] Under the DSM-5, autism is characterized by persistent deficits in social communication and interaction across multiple contexts, as well as restricted, repetitive patterns of behavior, interests, or activities. These deficits are present in early childhood, typically before age three, and lead to clinically significant functional impairment.[6] Sample symptoms include lack of social or emotional reciprocity, stereotyped and repetitive use of language or idiosyncratic language, and persistent preoccupation with unusual objects. The disturbance must not be better accounted for by Rett syndrome, intellectual disability or global developmental delay.[6] ICD-10 uses essentially the same definition.[31]
Several diagnostic instruments are available. Two are commonly used in autism research: the Autism Diagnostic Interview-Revised (ADI-R), a semistructured parent interview, and the Autism Diagnostic Observation Schedule (ADOS),[93] which uses observation and interaction with the child. The Childhood Autism Rating Scale (CARS) is used widely in clinical environments to assess severity of autism based on observation of children.[38] The Diagnostic Interview for Social and Communication Disorders (DISCO) may also be used.[94]
A pediatrician commonly performs a preliminary investigation by taking developmental history and physically examining the child. If warranted, diagnosis and evaluations are conducted with help from ASD specialists, observing and assessing cognitive, communication, family, and other factors using standardized tools, and taking into account any associated medical conditions.[95] A pediatric neuropsychologist is often asked to assess behavior and cognitive skills, both to aid diagnosis and to help recommend educational interventions.[96] A differential diagnosis for ASD at this stage might also consider intellectual disability, hearing impairment, and a specific language impairment[95] such as Landau–Kleffner syndrome.[97] The presence of autism can make it harder to diagnose coexisting psychiatric disorders such as depression.[98]
Clinical genetics evaluations are often done once ASD is diagnosed, particularly when other symptoms already suggest a genetic cause.[99] Although genetic technology allows clinical geneticists to link an estimated 40% of cases to genetic causes,[100] consensus guidelines in the US and UK are limited to high-resolution chromosome and fragile X testing.[99] A genotype-first model of diagnosis has been proposed, which would routinely assess the genome's copy number variations.[101] As new genetic tests are developed, several ethical, legal, and social issues will emerge. Commercial availability of tests may precede adequate understanding of how to use test results, given the complexity of autism's genetics.[102] Metabolic and neuroimaging tests are sometimes helpful, but are not routine.[99]
ASD can sometimes be diagnosed by age 14 months, although diagnosis becomes increasingly stable over the first three years of life: for example, a one-year-old who meets diagnostic criteria for ASD is less likely than a three-year-old to continue to do so a few years later.[1] In the UK the National Autism Plan for Children recommends at most 30 weeks from first concern to completed diagnosis and assessment, though few cases are handled that quickly in practice.[95] Although the symptoms of autism and ASD begin early in childhood, they are sometimes missed; years later, adults may seek diagnoses to help them or their friends and family understand themselves, to help their employers make adjustments, or in some locations to claim disability living allowances or other benefits. Girls are often diagnosed later than boys.[103]
Underdiagnosis and overdiagnosis are problems in marginal cases, and much of the recent increase in the number of reported ASD cases is likely due to changes in diagnostic practices. The increasing popularity of drug treatment options and the expansion of benefits has given providers incentives to diagnose ASD, resulting in some overdiagnosis of children with uncertain symptoms. Conversely, the cost of screening and diagnosis and the challenge of obtaining payment can inhibit or delay diagnosis.[104] It is particularly hard to diagnose autism among the visually impaired, partly because some of its diagnostic criteria depend on vision, and partly because autistic symptoms overlap with those of common blindness syndromes or blindisms.[105]
Autism is one of the five pervasive developmental disorders (PDD), which are characterized by widespread abnormalities of social interactions and communication, and severely restricted interests and highly repetitive behavior.[31] These symptoms do not imply sickness, fragility, or emotional disturbance.[34]
Of the five PDD forms, Asperger syndrome is closest to autism in signs and likely causes; Rett syndrome and childhood disintegrative disorder share several signs with autism, but may have unrelated causes; PDD not otherwise specified (PDD-NOS; also called atypical autism) is diagnosed when the criteria are not met for a more specific disorder.[106] Unlike with autism, people with Asperger syndrome have no substantial delay in language development.[107] The terminology of autism can be bewildering, with autism, Asperger syndrome and PDD-NOS often called the autism spectrum disorders (ASD)[9] or sometimes the autistic disorders,[108] whereas autism itself is often called autistic disorder, childhood autism, or infantile autism. In this article, autism refers to the classic autistic disorder; in clinical practice, though, autism, ASD, and PDD are often used interchangeably.[99] ASD, in turn, is a subset of the broader autism phenotype, which describes individuals who may not have ASD but do have autistic-like traits, such as avoiding eye contact.[109]
Autism can also be divided into syndromal and non-syndromal autism; syndromal autism is associated with severe or profound intellectual disability or with a congenital syndrome that has physical symptoms, such as tuberous sclerosis.[110] Although individuals with Asperger syndrome tend to perform better cognitively than those with autism, the extent of the overlap between Asperger syndrome, high-functioning autism (HFA), and non-syndromal autism is unclear.[111]
Some studies have reported diagnoses of autism in children due to a loss of language or social skills, as opposed to a failure to make progress, typically from 15 to 30 months of age. The validity of this distinction remains controversial; it is possible that regressive autism is a specific subtype,[1][17][46][112] or that there is a continuum of behaviors between autism with and without regression.[113]
Research into causes has been hampered by the inability to identify biologically meaningful subgroups within the autistic population[114] and by the traditional boundaries between the disciplines of psychiatry, psychology, neurology and pediatrics.[115] Newer technologies such as fMRI and diffusion tensor imaging can help identify biologically relevant phenotypes (observable traits) that can be viewed on brain scans, to help further neurogenetic studies of autism;[116] one example is lowered activity in the fusiform face area of the brain, which is associated with impaired perception of people versus objects.[22] It has been proposed to classify autism using genetics as well as behavior.[117]
Autism has long been thought to cover a wide spectrum, ranging from individuals with severe impairments—who may be silent, developmentally disabled, and prone to frequent repetitive behavior such as hand flapping and rocking—to high functioning individuals who may have active but distinctly odd social approaches, narrowly focused interests, and verbose, pedantic communication.[118] Because the behavior spectrum is continuous, boundaries between diagnostic categories are necessarily somewhat arbitrary.[54] Sometimes the syndrome is divided into low-, medium- or high-functioning autism (LFA, MFA, and HFA), based on IQ thresholds.[119] Some people have called for an end to the terms "high-functioning" and "low-functioning" due to lack of nuance and the potential for a person's needs or abilities to be overlooked.[120][121]
About half of parents of children with ASD notice their child's unusual behaviors by age 18 months, and about four-fifths notice by age 24 months.[1] According to one article, failure to meet expected developmental milestones "is an absolute indication to proceed with further evaluations. Delay in referral for such testing may delay early diagnosis and treatment and affect the long-term outcome".[35]
The United States Preventive Services Task Force in 2016 found it was unclear whether screening is beneficial or harmful among children in whom there are no concerns.[123] The Japanese practice is to screen all children for ASD at 18 and 24 months, using autism-specific formal screening tests. In contrast, in the UK, children whose families or doctors recognize possible signs of autism are screened. It is not known which approach is more effective.[22] Screening tools include the Modified Checklist for Autism in Toddlers (M-CHAT), the Early Screening of Autistic Traits Questionnaire, and the First Year Inventory; initial data on M-CHAT and its predecessor, the Checklist for Autism in Toddlers (CHAT), in children aged 18–30 months suggest that it is best used in a clinical setting and that it has low sensitivity (many false negatives) but good specificity (few false positives).[1] It may be more accurate to precede these tests with a broadband screener that does not distinguish ASD from other developmental disorders.[124] Screening tools designed for one culture's norms for behaviors like eye contact may be inappropriate for a different culture.[125] Although genetic screening for autism is generally still impractical, it can be considered in some cases, such as children with neurological symptoms and dysmorphic features.[126]
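The sensitivity/specificity distinction above can be made concrete with a small sketch. The counts below are purely illustrative and do not come from the cited M-CHAT studies: sensitivity is the fraction of true cases a screener flags, and specificity is the fraction of non-cases it correctly passes.

```python
# Illustrative confusion-matrix counts for a hypothetical screener
# (invented numbers, not real M-CHAT data).
true_positives = 30    # children with ASD flagged by the screen
false_negatives = 70   # children with ASD the screen missed
true_negatives = 980   # children without ASD correctly passed
false_positives = 20   # children without ASD incorrectly flagged

# Sensitivity: of all true cases, how many were caught?
sensitivity = true_positives / (true_positives + false_negatives)   # 0.30

# Specificity: of all non-cases, how many were correctly passed?
specificity = true_negatives / (true_negatives + false_positives)   # 0.98

print(f"sensitivity = {sensitivity:.2f}, specificity = {specificity:.2f}")
```

With numbers like these, the screener rarely raises a false alarm (high specificity) but misses most true cases (low sensitivity), which is why the text notes such a tool is best used in a clinical setting rather than as a stand-alone population screen.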
While infection with rubella during pregnancy causes fewer than 1% of cases of autism,[127] vaccination against rubella can prevent many of those cases.[128]
The main goals when treating children with autism are to lessen associated deficits and family distress, and to increase quality of life and functional independence. In general, higher IQs are correlated with greater responsiveness to treatment and improved treatment outcomes.[130][131] No single treatment is best and treatment is typically tailored to the child's needs.[9] Families and the educational system are the main resources for treatment.[22] Services should be carried out by behavior analysts, special education teachers, speech pathologists, and licensed psychologists. Studies of interventions have methodological problems that prevent definitive conclusions about efficacy.[132] However, the development of evidence-based interventions has advanced in recent years.[130] Although many psychosocial interventions have some positive evidence, suggesting that some form of treatment is preferable to no treatment, the methodological quality of systematic reviews of these studies has generally been poor, their clinical results are mostly tentative, and there is little evidence for the relative effectiveness of treatment options.[133] Intensive, sustained special education programs and behavior therapy early in life can help children acquire self-care, communication, and job skills,[9] and often improve functioning and decrease symptom severity and maladaptive behaviors;[134] claims that intervention by around age three years is crucial are not substantiated.[135] While medications have not been found to help with core symptoms, they may be used for associated symptoms, such as irritability, inattention, or repetitive behavior patterns.[12]
Educational interventions often used include applied behavior analysis (ABA), developmental models, structured teaching, speech and language therapy, social skills therapy, and occupational therapy, as well as cognitive behavioral interventions in adults without intellectual disability to reduce depression, anxiety, and obsessive-compulsive disorder.[9][136] Among these approaches, interventions either treat autistic features comprehensively or focus treatment on a specific area of deficit.[130] The quality of research for early intensive behavioral intervention (EIBI), a treatment procedure incorporating more than thirty hours per week of structured ABA carried out with very young children, is currently low, and more rigorous research designs with larger sample sizes are needed.[137] Two theoretical frameworks outlined for early childhood intervention are structured and naturalistic ABA interventions and developmental social pragmatic (DSP) models.[130] One interventional strategy uses a parent training model, which teaches parents how to implement various ABA and DSP techniques, allowing parents to deliver interventions themselves.[130] Various DSP programs have been developed to deliver intervention systems explicitly through at-home parent implementation. Although parent training models are a recent development, these interventions have demonstrated effectiveness in numerous studies and have been evaluated as a probably efficacious mode of treatment.[130]
Early, intensive ABA therapy has demonstrated effectiveness in enhancing communication and adaptive functioning in preschool children;[9][138] it is also well established for improving the intellectual performance of that age group.[9][134][138] Similarly, a teacher-implemented intervention that uses a more naturalistic form of ABA combined with a developmental social pragmatic approach has been found to be beneficial in improving social-communication skills in young children, although there is less evidence for its effect on global symptoms.[130] Neuropsychological reports are often poorly communicated to educators, resulting in a gap between what a report recommends and what education is provided.[96] It is not known whether treatment programs for children lead to significant improvements after the children grow up,[134] and the limited research on the effectiveness of adult residential programs shows mixed results.[139] The appropriateness of including children with varying severity of autism spectrum disorders in the general education population is a subject of current debate among educators and researchers.[140]
Medications may be used to treat ASD symptoms that interfere with integrating a child into home or school when behavioral treatment fails.[10] They may also be used for associated health problems, such as ADHD or anxiety.[10] More than half of US children diagnosed with ASD are prescribed psychoactive drugs or anticonvulsants, with the most common drug classes being antidepressants, stimulants, and antipsychotics.[13][14] The atypical antipsychotic drugs risperidone and aripiprazole are FDA-approved for treating associated aggressive and self-injurious behaviors.[12][34][141] However, their side effects must be weighed against their potential benefits, and people with autism may respond atypically.[12] Side effects, for example, may include weight gain, tiredness, drooling, and aggression.[12] SSRI antidepressants, such as fluoxetine and fluvoxamine, have been shown to be effective in reducing repetitive and ritualistic behaviors, while the stimulant medication methylphenidate is beneficial for some children with co-morbid inattentiveness or hyperactivity.[9] There is scant reliable research about the effectiveness or safety of drug treatments for adolescents and adults with ASD.[142] No known medication relieves autism's core symptoms of social and communication impairments.[143] Experiments in mice have reversed or reduced some symptoms related to autism by replacing or modulating gene function,[90][144] suggesting the possibility of targeting therapies to specific rare mutations known to cause autism.[89][145]
Although many alternative therapies and interventions are available, few are supported by scientific studies.[41][146] Treatment approaches have little empirical support in quality-of-life contexts, and many programs focus on success measures that lack predictive validity and real-world relevance.[43] Some alternative treatments may place the child at risk. The preference of children with autism for unconventional foods can lead to reduced bone cortical thickness, an effect greater in those on casein-free diets as a consequence of low intake of calcium and vitamin D; however, suboptimal bone development in ASD has also been associated with lack of exercise and gastrointestinal disorders.[147] In 2005, botched chelation therapy killed a five-year-old child with autism.[148][149] Chelation is not recommended for people with ASD, since the associated risks outweigh any potential benefits.[150] Another alternative medicine practice with no supporting evidence is CEASE therapy, a mixture of homeopathy, supplements, and 'vaccine detoxing'.[151][152]
Although popularly used as an alternative treatment for people with autism, as of 2018 there is no good evidence to recommend a gluten- and casein-free diet as a standard treatment.[153][154][155] A 2018 review concluded that it may be a therapeutic option for specific groups of children with autism, such as those with known food intolerances or allergies, or with food intolerance markers. The authors analyzed the four prospective trials conducted to date that studied the efficacy of the gluten- and casein-free diet in children with ASD. All of them compared a gluten- and casein-free diet against a normal diet with a control group (two double-blind randomized controlled trials, one double-blind crossover trial, and one single-blind trial). In two of the studies, whose duration was 12 and 24 months, a significant improvement in ASD symptoms (efficacy rate 50%) was identified. In the other two studies, whose duration was 3 months, no significant effect was observed.[153] The authors concluded that a longer duration of the diet may be necessary to achieve improvement of ASD symptoms.[153] Other problems documented in the trials include dietary transgressions, small sample sizes, heterogeneity of the participants, and the possibility of a placebo effect.[155][156] In the subset of people who have gluten sensitivity, there is limited evidence suggesting that a gluten-free diet may improve some autistic behaviors.[157][158][159]
Results of a systematic review on interventions to address health outcomes among autistic adults found emerging evidence to support mindfulness-based interventions for improving mental health. This includes decreasing stress, anxiety, ruminating thoughts, anger, and aggression.[136] There is tentative evidence that music therapy may improve social interactions, verbal communication, and non-verbal communication skills.[160] There has been early research looking at hyperbaric treatments in children with autism.[161] Studies on pet therapy have shown positive effects.[162]
There is no known cure.[9][22] The degree of symptoms can decrease, occasionally to the extent that people lose their diagnosis of ASD;[24] this occurs sometimes after intensive treatment and sometimes not. It is not known how often recovery happens;[134] reported rates in unselected samples have ranged from 3% to 25%.[24] Most children with autism acquire language by age five or younger, though a few have developed communication skills in later years.[163] Many children with autism lack social support, future employment opportunities or self-determination.[43] Although core difficulties tend to persist, symptoms often become less severe with age.[34]
Few high-quality studies address long-term prognosis. Some adults show modest improvement in communication skills, but a few decline; no study has focused on autism after midlife.[164] Acquiring language before age six, having an IQ above 50, and having a marketable skill all predict better outcomes; independent living is unlikely with severe autism.[165]
Many individuals with autism face significant obstacles in transitioning to adulthood.[166] Compared to the general population, individuals with autism are more likely to be unemployed and to have never had a job. About half of people in their 20s with autism are not employed.[167]
Most recent reviews tend to estimate a prevalence of 1–2 per 1,000 for autism and close to 6 per 1,000 for ASD as of 2007.[27] A 2016 survey in the United States reported a rate of 25 per 1,000 children for ASD.[168] Globally, autism affects an estimated 24.8 million people as of 2015[update], while Asperger syndrome affects a further 37.2 million.[16] In 2012, the NHS estimated that the overall prevalence of autism among adults aged 18 years and over in the UK was 1.1%.[169] Rates of PDD-NOS have been estimated at 3.7 per 1,000, Asperger syndrome at roughly 0.6 per 1,000, and childhood disintegrative disorder at 0.02 per 1,000.[170] The CDC estimated about 1 out of 59 children (1.7%) for 2014, an increase from 1 out of every 68 children (1.5%) for 2010.[171]
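The "1 in N" and "per 1,000" figures quoted above are easy to cross-check. A minimal arithmetic sketch (the helper name is my own, not from the source):

```python
def one_in_n_as_percent(n, digits=1):
    """Convert a '1 in n' prevalence figure to a percentage."""
    return round(100.0 / n, digits)

# CDC figures cited in the text:
print(one_in_n_as_percent(59))  # 2014 estimate, 1 in 59 children -> 1.7
print(one_in_n_as_percent(68))  # 2010 estimate, 1 in 68 children -> 1.5
# The 2016 survey's 25 per 1,000 is the same ratio as 1 in 40, i.e. 2.5%:
print(one_in_n_as_percent(40))  # -> 2.5
```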
The number of reported cases of autism increased dramatically in the 1990s and early 2000s. This increase is largely attributable to changes in diagnostic practices, referral patterns, availability of services, age at diagnosis, and public awareness,[170][172] though unidentified environmental risk factors cannot be ruled out.[21] The available evidence does not rule out the possibility that autism's true prevalence has increased;[170] a real increase would suggest directing more attention and funding toward changing environmental factors instead of continuing to focus on genetics.[173]
Boys are at higher risk for ASD than girls. The sex ratio averages 4.3:1 and is greatly modified by cognitive impairment: it may be close to 2:1 with intellectual disability and more than 5.5:1 without.[27] Several theories about the higher prevalence in males have been investigated, but the cause of the difference is unconfirmed;[174] one theory is that females are underdiagnosed.[175]
Although the evidence does not implicate any single pregnancy-related risk factor as a cause of autism, the risk of autism is associated with advanced age in either parent, and with diabetes, bleeding, and use of psychiatric drugs in the mother during pregnancy.[174][176] The risk is greater with older fathers than with older mothers; two potential explanations are the known increase in mutation burden in older sperm, and the hypothesis that men marry later if they carry genetic liability and show some signs of autism.[30] Most professionals believe that race, ethnicity, and socioeconomic background do not affect the occurrence of autism.[177]
Several other conditions are common in children with autism.[22] They include:
A few examples of autistic symptoms and treatments were described long before autism was named. The Table Talk of Martin Luther, compiled by his notetaker, Mathesius, contains the story of a 12-year-old boy who may have been severely autistic.[190] Luther reportedly thought the boy was a soulless mass of flesh possessed by the devil, and suggested that he be suffocated, although a later critic has cast doubt on the veracity of this report.[191] The earliest well-documented case of autism is that of Hugh Blair of Borgue, as detailed in a 1747 court case in which his brother successfully petitioned to annul Blair's marriage to gain Blair's inheritance.[192] The Wild Boy of Aveyron, a feral child caught in 1798, showed several signs of autism; the medical student Jean Itard treated him with a behavioral program designed to help him form social attachments and to induce speech via imitation.[189]
The New Latin word autismus (English translation autism) was coined by the Swiss psychiatrist Eugen Bleuler in 1910 as he was defining symptoms of schizophrenia. He derived it from the Greek word autós (αὐτός, meaning "self"), and used it to mean morbid self-admiration, referring to "autistic withdrawal of the patient to his fantasies, against which any influence from outside becomes an intolerable disturbance".[193] A Soviet child psychiatrist, Grunya Sukhareva, described a similar syndrome that was published in Russian in 1925, and in German in 1926.[194]
The word autism first took its modern sense in 1938 when Hans Asperger of the Vienna University Hospital adopted Bleuler's terminology autistic psychopaths in a lecture in German about child psychology.[195] Asperger was investigating an ASD now known as Asperger syndrome, though for various reasons it was not widely recognized as a separate diagnosis until 1981.[189] Leo Kanner of the Johns Hopkins Hospital first used autism in its modern sense in English when he introduced the label early infantile autism in a 1943 report of 11 children with striking behavioral similarities.[48] Almost all the characteristics described in Kanner's first paper on the subject, notably "autistic aloneness" and "insistence on sameness", are still regarded as typical of the autistic spectrum of disorders.[65] It is not known whether Kanner derived the term independently of Asperger.[196]
Donald Triplett was the first person diagnosed with autism.[197] He was diagnosed by Kanner after being first examined in 1938, and was labeled as "case 1".[197] Triplett was noted for his savant abilities, particularly being able to name musical notes played on a piano and to mentally multiply numbers. His father, Oliver, described him as socially withdrawn but interested in number patterns, music notes, letters of the alphabet, and U.S. president pictures. By the age of 2, he could recite the 23rd Psalm and had memorized 25 questions and answers from the Presbyterian catechism. He was also interested in creating musical chords.[198]
Kanner's reuse of autism led to decades of confused terminology like infantile schizophrenia, and child psychiatry's focus on maternal deprivation led to misconceptions of autism as an infant's response to "refrigerator mothers". Starting in the late 1960s autism was established as a separate syndrome.[199]
As late as the mid-1970s there was little evidence of a genetic role in autism, but by 2007 it was believed to be one of the most heritable psychiatric conditions.[200] Although the rise of parent organizations and the destigmatization of childhood ASD have affected how ASD is viewed,[189] parents continue to feel social stigma in situations where their child's autistic behavior is perceived negatively,[201] and many primary care physicians and medical specialists express some beliefs consistent with outdated autism research.[202]
It took until 1980 for the DSM-III to differentiate autism from childhood schizophrenia. In 1987, the DSM-III-R provided a checklist for diagnosing autism. In May 2013, the DSM-5 was released, updating the classification for pervasive developmental disorders. The grouping of disorders, including PDD-NOS, autism, Asperger syndrome, Rett syndrome, and CDD, has been removed and replaced with the general term of Autism Spectrum Disorders. The two categories that exist are impaired social communication and/or interaction, and restricted and/or repetitive behaviors.[203]
The Internet has helped autistic individuals bypass nonverbal cues and emotional sharing that they find difficult to deal with, and has given them a way to form online communities and work remotely.[204] Societal and cultural aspects of autism have developed: some in the community seek a cure, while others believe that autism is simply another way of being.[25][26][205]
An autistic culture has emerged, accompanied by the autistic rights and neurodiversity movements.[206][207][208] Events include World Autism Awareness Day, Autism Sunday, Autistic Pride Day, Autreat, and others.[209][210][211][212] Organizations dedicated to promoting awareness of autism include the Autistic Self Advocacy Network, Aspies For Freedom, the Autism National Committee, and the Autism Society of America. At the same time, some organizations, including Autism Speaks, have been condemned by disability rights organizations for failing to support autistic people.[213] Social-science scholars study those with autism in hopes of learning more about "autism as a culture, transcultural comparisons... and research on social movements."[214] While most autistic individuals do not have savant skills, many have been successful in their fields.[215][216][217]
The autism rights movement is a social movement within the context of disability rights that emphasizes the concept of neurodiversity, viewing the autism spectrum as a result of natural variations in the human brain rather than a disorder to be cured.[208] The autism rights movement advocates for greater acceptance of autistic behaviors; therapies that focus on coping skills rather than on imitating the behaviors of those without autism;[218] and the recognition of the autistic community as a minority group.[218][219] Autism rights or neurodiversity advocates believe that the autism spectrum is genetic and should be accepted as a natural expression of the human genome. This perspective is distinct from two other views: the medical perspective, that autism is caused by a genetic defect and should be addressed by targeting the autism gene(s), and fringe theories that autism is caused by environmental factors such as vaccines.[208] A common criticism against autistic activists is that the majority of them are "high-functioning" or have Asperger syndrome and do not represent the views of "low-functioning" autistic people.[219]
About half of autistics are unemployed, and one third of those with graduate degrees may be unemployed.[220] Among autistics who find work, most are employed in sheltered settings, working for wages below the national minimum.[221] While employers cite hiring concerns about productivity and supervision, experienced employers of autistics give positive reports of above-average memory and detail orientation, as well as a high regard for rules and procedure, in autistic employees.[220] A majority of the economic burden of autism is caused by decreased earnings in the job market.[222] Some studies also find decreased earnings among parents who care for autistic children.[223][224]
en/4690.html.txt
ADDED
@@ -0,0 +1,222 @@
Fish are gill-bearing aquatic craniate animals that lack limbs with digits. They form a sister group to the tunicates, together forming the Olfactores. Included in this definition are the living hagfish, lampreys, and cartilaginous and bony fish, as well as various extinct related groups.
The earliest organisms that can be classified as fish were soft-bodied chordates that first appeared during the Cambrian period. Although they lacked a true spine, they possessed notochords which allowed them to be more agile than their invertebrate counterparts. Fish would continue to evolve through the Paleozoic era, diversifying into a wide variety of forms. Many fish of the Paleozoic developed external armor that protected them from predators. The first fish with jaws appeared in the Silurian period, after which many (such as sharks) became formidable marine predators rather than just the prey of arthropods.
Most fish are ectothermic ("cold-blooded"), allowing their body temperatures to vary as ambient temperatures change, though some large active swimmers, like the white shark and tuna, can hold a higher core temperature.[1][2]
Fish can communicate in their underwater environments through the use of acoustic communication. Acoustic communication in fish involves the transmission of acoustic signals from one individual of a species to another. The production of sounds as a means of communication among fish is most often used in the context of feeding, aggression or courtship behaviour.[3] The sounds emitted by fish can vary depending on the species and stimulus involved. They can produce either stridulatory sounds by moving components of the skeletal system, or can produce non-stridulatory sounds by manipulating specialized organs such as the swimbladder.[4]
Fish are abundant in most bodies of water. They can be found in nearly all aquatic environments, from high mountain streams (e.g., char and gudgeon) to the abyssal and even hadal depths of the deepest oceans (e.g., cusk-eels and snailfish), although no species has yet been documented in the deepest 25% of the ocean.[5] With 34,300 described species, fish exhibit greater species diversity than any other group of vertebrates.[6]
Fish are an important resource for humans worldwide, especially as food. Commercial and subsistence fishers hunt fish in wild fisheries (see fishing) or farm them in ponds or in cages in the ocean (see aquaculture). They are also caught by recreational fishers, kept as pets, raised by fishkeepers, and exhibited in public aquaria. Fish have had a role in culture through the ages, serving as deities, religious symbols, and as the subjects of art, books and movies.
Tetrapods emerged within lobe-finned fishes, so cladistically they are fish as well. However, traditionally fish are rendered paraphyletic by excluding the tetrapods (i.e., the amphibians, reptiles, birds and mammals which all descended from within the same ancestry). Because in this manner the term "fish" is defined negatively as a paraphyletic group, it is not considered a formal taxonomic grouping in systematic biology, unless it is used in the cladistic sense, including tetrapods.[7][8] The traditional term pisces (also ichthyes) is considered a typological, but not a phylogenetic classification.
The word for fish in English and the other Germanic languages (German Fisch; Gothic fisks) is inherited from Proto-Germanic, and is related to the Latin piscis and Old Irish īasc, though the exact root is unknown; some authorities reconstruct a Proto-Indo-European root *peysk-, attested only in Italic, Celtic, and Germanic.[9][10][11][12]
Fish, as vertebrates, developed as the sister group of the tunicates. Because the tetrapods emerged deep within the fish group, as the sister group of the lungfish, characteristics of fish are typically shared by tetrapods, including having vertebrae and a cranium.
Early fish from the fossil record are represented by a group of small, jawless, armored fish known as ostracoderms. Jawless fish lineages are mostly extinct. An extant clade, the lampreys, may approximate ancient pre-jawed fish. The first jaws are found in Placodermi fossils. They lacked distinct teeth, instead having the oral surfaces of their jaw plates modified to serve the various purposes of teeth. The diversity of jawed vertebrates may indicate the evolutionary advantage of a jawed mouth. It is unclear whether the advantage of a hinged jaw is greater biting force, improved respiration, or a combination of factors.
Fish may have evolved from a creature similar to a coral-like sea squirt, whose larvae resemble primitive fish in important ways. The first ancestors of fish may have kept the larval form into adulthood (as some sea squirts do today), although perhaps the reverse is the case.
Fish are a paraphyletic group: that is, any clade containing all fish also contains the tetrapods, which are not fish. For this reason, groups such as the class Pisces seen in older reference works are no longer used in formal classifications.
Traditional classification divides fish into three extant classes, with extinct forms sometimes classified within those classes and sometimes as classes of their own:[14][15]
The above scheme is the one most commonly encountered in non-specialist and general works. Many of the above groups are paraphyletic, in that they have given rise to successive groups: Agnathans are ancestral to Chondrichthyes, who again have given rise to Acanthodians, the ancestors of Osteichthyes. With the arrival of phylogenetic nomenclature, the fishes have been split up into a more detailed scheme, with the following major groups:
† – indicates extinct taxon

Some palaeontologists contend that because Conodonta are chordates, they are primitive fish. For a fuller treatment of this taxonomy, see the vertebrate article.
The position of hagfish in the phylum Chordata is not settled. Phylogenetic research in 1998 and 1999 supported the idea that the hagfish and the lampreys form a natural group, the Cyclostomata, that is a sister group of the Gnathostomata.[16][17]
The various fish groups account for more than half of vertebrate species. There are almost 28,000 known extant species, of which almost 27,000 are bony fish, with 970 sharks, rays, and chimeras and about 108 hagfish and lampreys.[18] A third of these species fall within the nine largest families; from largest to smallest, these families are Cyprinidae, Gobiidae, Cichlidae, Characidae, Loricariidae, Balitoridae, Serranidae, Labridae, and Scorpaenidae. About 64 families are monotypic, containing only one species. The final total of extant species may grow to exceed 32,500.[19]
Agnatha (Pacific hagfish)
Chondrichthyes (Horn shark)
Actinopterygii (Brown trout)
Sarcopterygii (Coelacanth)
The term "fish" most precisely describes any non-tetrapod craniate (i.e. an animal with a skull and in most cases a backbone) that has gills throughout life and whose limbs, if any, are in the shape of fins.[21] Unlike groupings such as birds or mammals, fish are not a single clade but a paraphyletic collection of taxa, including hagfishes, lampreys, sharks and rays, ray-finned fish, coelacanths, and lungfish.[22][23] Indeed, lungfish and coelacanths are closer relatives of tetrapods (such as mammals, birds, amphibians, etc.) than of other fish such as ray-finned fish or sharks, so the last common ancestor of all fish is also an ancestor to tetrapods. As paraphyletic groups are no longer recognised in modern systematic biology, the use of the term "fish" as a biological group must be avoided.
Many types of aquatic animals commonly referred to as "fish" are not fish in the sense given above; examples include shellfish, cuttlefish, starfish, crayfish and jellyfish. In earlier times, even biologists did not make a distinction – sixteenth-century natural historians also classified seals, whales, amphibians, crocodiles, even hippopotamuses, as well as a host of aquatic invertebrates, as fish.[24] However, according to the definition above, all mammals, including cetaceans like whales and dolphins, are not fish. In some contexts, especially in aquaculture, the true fish are referred to as finfish (or fin fish) to distinguish them from these other animals.
A typical fish is ectothermic, has a streamlined body for rapid swimming, extracts oxygen from water using gills or uses an accessory breathing organ to breathe atmospheric oxygen, has two sets of paired fins, usually one or two (rarely three) dorsal fins, an anal fin, and a tail fin, has jaws, has skin that is usually covered with scales, and lays eggs.
Each criterion has exceptions. Tuna, swordfish, and some species of sharks show some warm-blooded adaptations – they can heat their bodies significantly above ambient water temperature.[22] Streamlining and swimming performance varies from fish such as tuna, salmon, and jacks that can cover 10–20 body-lengths per second to species such as eels and rays that swim no more than 0.5 body-lengths per second.[25] Many groups of freshwater fish extract oxygen from the air as well as from the water using a variety of different structures. Lungfish have paired lungs similar to those of tetrapods, gouramis have a structure called the labyrinth organ that performs a similar function, while many catfish, such as Corydoras, extract oxygen via the intestine or stomach.[26] Body shape and the arrangement of the fins is highly variable, covering such seemingly un-fishlike forms as seahorses, pufferfish, anglerfish, and gulpers. Similarly, the surface of the skin may be naked (as in moray eels), or covered with scales of a variety of different types usually defined as placoid (typical of sharks and rays), cosmoid (fossil lungfish and coelacanths), ganoid (various fossil fish but also living gars and bichirs), cycloid, and ctenoid (these last two are found on most bony fish).[27] There are even fish that live mostly on land or lay their eggs on land near water.[28] Mudskippers feed and interact with one another on mudflats and go underwater to hide in their burrows.[29] A single undescribed species of Phreatobius has been called a true "land fish", as this worm-like catfish strictly lives among waterlogged leaf litter.[30][31] Many species live in underground lakes, underground rivers or aquifers and are popularly known as cavefish.[32]
Fish range in size from the huge 16-metre (52 ft) whale shark to the tiny 8-millimetre (0.3 in) stout infantfish.
Fish species diversity is roughly divided equally between marine (oceanic) and freshwater ecosystems. Coral reefs in the Indo-Pacific constitute the center of diversity for marine fishes, whereas continental freshwater fishes are most diverse in large river basins of tropical rainforests, especially the Amazon, Congo, and Mekong basins. More than 5,600 fish species inhabit Neotropical freshwaters alone, such that Neotropical fishes represent about 10% of all vertebrate species on the Earth. Exceptionally rich sites in the Amazon basin, such as Cantão State Park, can contain more freshwater fish species than occur in all of Europe.[33]
Most fish exchange gases using gills on either side of the pharynx. Gills consist of threadlike structures called filaments. Each filament contains a capillary network that provides a large surface area for exchanging oxygen and carbon dioxide. Fish exchange gases by pulling oxygen-rich water through their mouths and pumping it over their gills. In some fish, capillary blood flows in the opposite direction to the water, causing countercurrent exchange. The gills push the oxygen-poor water out through openings in the sides of the pharynx. Some fish, like sharks and lampreys, possess multiple gill openings. However, bony fish have a single gill opening on each side. This opening is hidden beneath a protective bony cover called an operculum.
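The benefit of the countercurrent arrangement described above can be illustrated with a toy two-channel model (an illustrative sketch only; the cell count, exchange fraction, and function name are my own assumptions, not from the source):

```python
def blood_outlet_o2(co_current, n_cells=50, k=0.1, iters=4000):
    """Toy gill model: water and blood channels split into n_cells;
    each step both fluids advance one cell and exchange a fraction
    k of the local O2 gradient."""
    water = [1.0] * n_cells   # O2-rich water enters at index 0
    blood = [0.0] * n_cells   # O2-poor blood enters at its own inlet
    for _ in range(iters):
        water = [1.0] + water[:-1]          # water always flows 0 -> n-1
        if co_current:
            blood = [0.0] + blood[:-1]      # blood flows with the water
        else:
            blood = blood[1:] + [0.0]       # blood flows against the water
        for i in range(n_cells):            # local diffusive exchange
            d = k * (water[i] - blood[i])
            water[i] -= d
            blood[i] += d
    # O2 level of the blood leaving the exchanger at steady state
    return blood[-1] if co_current else blood[0]

counter = blood_outlet_o2(co_current=False)
co = blood_outlet_o2(co_current=True)
# Co-current flow can at best equilibrate both channels (~0.5 here),
# while countercurrent flow lets exiting blood approach the O2 level
# of the incoming water, so `counter` comes out well above `co`.
print(counter, co)
```

The model is deliberately crude, but it shows why blood flowing against the water maintains a favorable gradient along the whole gill filament.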
Juvenile bichirs have external gills, a very primitive feature that they share with larval amphibians.
Fish from multiple groups can live out of the water for extended periods. Amphibious fish such as the mudskipper can live and move about on land for up to several days, or live in stagnant or otherwise oxygen-depleted water. Many such fish can breathe air via a variety of mechanisms. The skin of anguillid eels may absorb oxygen directly. The buccal cavity of the electric eel may breathe air. Catfish of the families Loricariidae, Callichthyidae, and Scoloplacidae absorb air through their digestive tracts.[34] Lungfish, with the exception of the Australian lungfish, and bichirs have paired lungs similar to those of tetrapods and must surface to gulp fresh air through the mouth and pass spent air out through the gills. Gar and bowfin have a vascularized swim bladder that functions in the same way. Loaches, trahiras, and many catfish breathe by passing air through the gut. Mudskippers breathe by absorbing oxygen across the skin (similar to frogs). A number of fish have evolved so-called accessory breathing organs that extract oxygen from the air. Labyrinth fish (such as gouramis and bettas) have a labyrinth organ above the gills that performs this function. A few other fish have structures resembling labyrinth organs in form and function, most notably snakeheads, pikeheads, and the Clariidae catfish family.
Breathing air is primarily of use to fish that inhabit shallow, seasonally variable waters where the water's oxygen concentration may seasonally decline. Fish dependent solely on dissolved oxygen, such as perch and cichlids, quickly suffocate, while air-breathers survive for much longer, in some cases in water that is little more than wet mud. At the most extreme, some air-breathing fish are able to survive in damp burrows for weeks without water, entering a state of aestivation (summertime hibernation) until water returns.
Air breathing fish can be divided into obligate air breathers and facultative air breathers. Obligate air breathers, such as the African lungfish, must breathe air periodically or they suffocate. Facultative air breathers, such as the catfish Hypostomus plecostomus, only breathe air if they need to and will otherwise rely on their gills for oxygen. Most air breathing fish are facultative air breathers that avoid the energetic cost of rising to the surface and the fitness cost of exposure to surface predators.[34]
Fish have a closed-loop circulatory system. The heart pumps the blood in a single loop throughout the body. In most fish, the heart consists of four parts, including two chambers and an entrance and exit.[35] The first part is the sinus venosus, a thin-walled sac that collects blood from the fish's veins before allowing it to flow to the second part, the atrium, which is a large muscular chamber. The atrium serves as a one-way antechamber that sends blood to the third part, the ventricle. The ventricle is another thick-walled, muscular chamber; it pumps the blood first to the fourth part, the bulbus arteriosus, a large tube, and then out of the heart. The bulbus arteriosus connects to the aorta, through which blood flows to the gills for oxygenation.
Jaws allow fish to eat a wide variety of food, including plants and other organisms. Fish ingest food through the mouth and break it down in the esophagus. In the stomach, food is further digested and, in many fish, processed in finger-shaped pouches called pyloric caeca, which secrete digestive enzymes and absorb nutrients. Organs such as the liver and pancreas add enzymes and various chemicals as the food moves through the digestive tract. The intestine completes the process of digestion and nutrient absorption.
As with many aquatic animals, most fish release their nitrogenous wastes as ammonia. Some of the wastes diffuse through the gills. Blood wastes are filtered by the kidneys.
Saltwater fish tend to lose water because of osmosis. Their kidneys return water to the body. The reverse happens in freshwater fish: they tend to gain water osmotically. Their kidneys produce dilute urine for excretion. Some fish have specially adapted kidneys that vary in function, allowing them to move from freshwater to saltwater.
The scales of fish originate from the mesoderm (skin); they may be similar in structure to teeth.
Fish typically have quite small brains relative to body size compared with other vertebrates, typically one-fifteenth the brain mass of a similarly sized bird or mammal.[36] However, some fish have relatively large brains, most notably mormyrids and sharks, which have brains about as massive relative to body weight as birds and marsupials.[37]
Fish brains are divided into several regions. At the front are the olfactory lobes, a pair of structures that receive and process signals from the nostrils via the two olfactory nerves.[36] The olfactory lobes are very large in fish that hunt primarily by smell, such as hagfish, sharks, and catfish. Behind the olfactory lobes is the two-lobed telencephalon, the structural equivalent to the cerebrum in higher vertebrates. In fish the telencephalon is concerned mostly with olfaction.[36] Together these structures form the forebrain.
Connecting the forebrain to the midbrain is the diencephalon (in the diagram, this structure is below the optic lobes and consequently not visible). The diencephalon performs functions associated with hormones and homeostasis.[36] The pineal body lies just above the diencephalon. This structure detects light, maintains circadian rhythms, and controls color changes.[36]
The midbrain (or mesencephalon) contains the two optic lobes. These are very large in species that hunt by sight, such as rainbow trout and cichlids.[36]
The hindbrain (or metencephalon) is particularly involved in swimming and balance.[36] The cerebellum is a single-lobed structure that is typically the biggest part of the brain.[36] Hagfish and lampreys have relatively small cerebella, while the mormyrid cerebellum is massive and apparently involved in their electrical sense.[36]
The brain stem (or myelencephalon) is the brain's posterior.[36] As well as controlling some muscles and body organs, in bony fish at least, the brain stem governs respiration and osmoregulation.[36]
Most fish possess highly developed sense organs. Nearly all daylight fish have color vision that is at least as good as a human's (see vision in fishes). Many fish also have chemoreceptors that are responsible for extraordinary senses of taste and smell. Although they have ears, many fish may not hear very well. Most fish have sensitive receptors that form the lateral line system, which detects gentle currents and vibrations, and senses the motion of nearby fish and prey.[38] Some fish, such as catfish and sharks, have the ampullae of Lorenzini, electroreceptors that detect weak electric currents on the order of a millivolt.[39] Other fish, like the South American electric fishes Gymnotiformes, can produce weak electric currents, which they use in navigation and social communication.
Fish orient themselves using landmarks and may use mental maps based on multiple landmarks or symbols. Fish behavior in mazes reveals that they possess spatial memory and visual discrimination.[40]
Vision is an important sensory system for most species of fish. Fish eyes are similar to those of terrestrial vertebrates like birds and mammals, but have a more spherical lens. Their retinas generally have both rods and cones (for scotopic and photopic vision), and most species have colour vision. Some fish can see ultraviolet and some can see polarized light. Amongst jawless fish, the lamprey has well-developed eyes, while the hagfish has only primitive eyespots.[41] Fish vision shows adaptation to their visual environment, for example deep sea fishes have eyes suited to the dark environment.
Hearing is an important sensory system for most species of fish. Fish sense sound using their lateral lines and their ears.
New research has expanded preconceptions about the cognitive capacities of fish. For example, manta rays have exhibited behavior linked to self-awareness in mirror test cases. Placed in front of a mirror, individual rays engaged in contingency testing, that is, repetitive behavior aiming to check whether their reflection's behavior mimics their body movement.[42]
Wrasses have also passed the mirror test in a 2018 scientific study.[43][44]
Cases of tool use have also been observed, notably in wrasses of the genus Choerodon, in archerfish, and in Atlantic cod.[45]
Experiments done by William Tavolga provide evidence that fish have pain and fear responses. For instance, in Tavolga's experiments, toadfish grunted when electrically shocked and over time they came to grunt at the mere sight of an electrode.[46]
In 2003, Scottish scientists at the University of Edinburgh and the Roslin Institute concluded that rainbow trout exhibit behaviors often associated with pain in other animals. Bee venom and acetic acid injected into the lips resulted in fish rocking their bodies and rubbing their lips along the sides and floors of their tanks, which the researchers concluded were attempts to relieve pain, similar to what mammals would do.[47][48] Neurons fired in a pattern resembling human neuronal patterns.[48]
Professor James D. Rose of the University of Wyoming claimed the study was flawed since it did not provide proof that fish possess "conscious awareness, particularly a kind of awareness that is meaningfully like ours".[49] Rose argues that since fish brains are so different from human brains, fish are probably not conscious in the manner humans are, so that reactions similar to human reactions to pain instead have other causes. Rose had published a study a year earlier arguing that fish cannot feel pain because their brains lack a neocortex.[50] However, animal behaviorist Temple Grandin argues that fish could still have consciousness without a neocortex because "different species can use different brain structures and systems to handle the same functions."[48]
Animal welfare advocates raise concerns about the possible suffering of fish caused by angling. Some countries, such as Germany, have banned specific types of fishing, and the British RSPCA now formally prosecutes individuals who are cruel to fish.[51]
In 2019, scientists showed that members of the monogamous species Amatitlania siquia exhibit pessimistic behavior when they are prevented from being with their partner.[52]
Most fish move by alternately contracting paired sets of muscles on either side of the backbone. These contractions form S-shaped curves that move down the body. As each curve reaches the back fin, backward force is applied to the water, and in conjunction with the fins, moves the fish forward. The fish's fins function like an airplane's flaps. Fins also increase the tail's surface area, increasing speed. The streamlined body of the fish decreases the amount of friction from the water. Since body tissue is denser than water, fish must compensate for the difference or they will sink. Many bony fish have an internal organ called a swim bladder that adjusts their buoyancy through manipulation of gases.
Although most fish are exclusively ectothermic, there are exceptions. The only known bony fishes (infraclass Teleostei) that exhibit endothermy are in the suborder Scombroidei – which includes the billfishes, tunas, and the butterfly kingfish, a basal species of mackerel[53] – and also the opah. The opah, a lampriform, was demonstrated in 2015 to utilize "whole-body endothermy", generating heat with its swimming muscles to warm its body while countercurrent exchange (as in respiration) minimizes heat loss.[54] It is able to actively hunt prey such as squid and swim for long distances due to the ability to warm its entire body, including its heart,[55] which is a trait typically found in only mammals and birds (in the form of homeothermy). In the cartilaginous fishes (class Chondrichthyes), sharks of the families Lamnidae (porbeagle, mackerel, salmon, and great white sharks) and Alopiidae (thresher sharks) exhibit endothermy. The degree of endothermy varies from the billfishes, which warm only their eyes and brain, to the bluefin tuna and the porbeagle shark, which maintain body temperatures in excess of 20 °C (68 °F) above ambient water temperatures.[53]
Endothermy, though metabolically costly, is thought to provide advantages such as increased muscle strength, higher rates of central nervous system processing, and higher rates of digestion.
Fish reproductive organs include testes and ovaries. In most species, gonads are paired organs of similar size, which can be partially or totally fused.[56] There may also be a range of secondary organs that increase reproductive fitness.
In terms of spermatogonia distribution, teleost testes have one of two structures: in the most common type, spermatogonia occur all along the seminiferous tubules, while in atherinomorph fish they are confined to the distal portion of these structures. Fish can present cystic or semi-cystic spermatogenesis, depending on the phase at which germ cells are released from cysts into the seminiferous tubule lumen.[56]
Fish ovaries may be of three types: gymnovarian, secondary gymnovarian or cystovarian. In the first type, the oocytes are released directly into the coelomic cavity, enter the ostium, pass through the oviduct, and are eliminated. Secondary gymnovarian ovaries shed ova into the coelom, from which they go directly into the oviduct. In the third type, the oocytes are conveyed to the exterior through the oviduct.[57] Gymnovaries are the primitive condition found in lungfish, sturgeon, and bowfin. Cystovaries characterize most teleosts, where the ovary lumen has continuity with the oviduct.[56] Secondary gymnovaries are found in salmonids and a few other teleosts.
Oogonia development in teleost fish varies according to the group, and determining the dynamics of oogenesis allows the maturation and fertilization processes to be understood. Changes in the nucleus, ooplasm, and the surrounding layers characterize the oocyte maturation process.[56]
Postovulatory follicles are structures formed after oocyte release; they do not have endocrine function, present a wide irregular lumen, and are rapidly reabsorbed in a process involving the apoptosis of follicular cells. A degenerative process called follicular atresia reabsorbs vitellogenic oocytes not spawned. This process can also occur, but less frequently, in oocytes in other development stages.[56]
Some fish, like the California sheephead, are hermaphrodites, having both testes and ovaries either at different phases in their life cycle or, as in hamlets, have them simultaneously.
Over 97% of all known fish are oviparous,[58] that is, the eggs develop outside the mother's body. Examples of oviparous fish include salmon, goldfish, cichlids, tuna, and eels. In the majority of these species, fertilisation takes place outside the mother's body, with the male and female fish shedding their gametes into the surrounding water. However, a few oviparous fish practice internal fertilization, with the male using some sort of intromittent organ to deliver sperm into the genital opening of the female, most notably the oviparous sharks, such as the horn shark, and oviparous rays, such as skates. In these cases, the male is equipped with a pair of modified pelvic fins known as claspers.
Marine fish can produce high numbers of eggs which are often released into the open water column. The eggs have an average diameter of 1 millimetre (0.04 in).
Egg of lamprey
Egg of catshark (mermaids' purse)
Egg of bullhead shark
Egg of chimaera
The newly hatched young of oviparous fish are called larvae. They are usually poorly formed, carry a large yolk sac (for nourishment), and are very different in appearance from juvenile and adult specimens. The larval period in oviparous fish is relatively short (usually only several weeks), and larvae rapidly grow and change appearance and structure (a process termed metamorphosis) to become juveniles. During this transition larvae must switch from their yolk sac to feeding on zooplankton prey, a process which depends on typically inadequate zooplankton density, starving many larvae.
In ovoviviparous fish the eggs develop inside the mother's body after internal fertilization but receive little or no nourishment directly from the mother, depending instead on the yolk. Each embryo develops in its own egg. Familiar examples of ovoviviparous fish include guppies, angel sharks, and coelacanths.
Some species of fish are viviparous. In such species the mother retains the eggs and nourishes the embryos. Typically, viviparous fish have a structure analogous to the placenta seen in mammals connecting the mother's blood supply with that of the embryo. Examples of viviparous fish include the surf-perches, splitfins, and lemon shark. Some viviparous fish exhibit oophagy, in which the developing embryos eat other eggs produced by the mother. This has been observed primarily among sharks, such as the shortfin mako and porbeagle, but is known for a few bony fish as well, such as the halfbeak Nomorhamphus ebrardtii.[59] Intrauterine cannibalism is an even more unusual mode of vivipary, in which the largest embryos eat weaker and smaller siblings. This behavior is also most commonly found among sharks, such as the grey nurse shark, but has also been reported for Nomorhamphus ebrardtii.[59]
Aquarists commonly refer to ovoviviparous and viviparous fish as livebearers.
Acoustic communication in fish involves the transmission of acoustic signals from one individual of a species to another. The production of sounds as a means of communication among fish is most often used in the context of feeding, aggression or courtship behaviour.[3]
The sounds emitted can vary depending on the species and stimulus involved. Fish can produce either stridulatory sounds by moving components of the skeletal system, or can produce non-stridulatory sounds by manipulating specialized organs such as the swimbladder.[4]
There are some species of fish that can produce sounds by rubbing or grinding their bones together. These noises produced by bone-on-bone interactions are known as 'stridulatory sounds'.[4]
An example of this is seen in Haemulon flavolineatum, a species commonly referred to as the 'French grunt fish', as it produces a grunting noise by grinding its teeth together.[4]
This behaviour is most pronounced when H. flavolineatum is in distress.[4] The grunts produced by this species generate a frequency of approximately 700 Hz and last approximately 47 milliseconds.[4] H. flavolineatum does not emit sounds with frequencies greater than 1000 Hz, and does not detect sounds that have frequencies greater than 1050 Hz.[4]
In a study conducted by Oliveira et al. (2014), the longsnout seahorse, Hippocampus reidi, was recorded producing two different categories of sounds: 'clicks' and 'growls'. The sounds emitted by H. reidi are produced by rubbing the coronet bone across the grooved section of the neurocranium.[60]
'Clicking' sounds were found to be primarily produced during courtship and feeding, and the frequencies of clicks were within the range of 50 Hz to 800 Hz.[61] The frequencies were noted to be on the higher end of the range during spawning periods, when the female and male fishes were less than fifteen centimeters apart.[61] Growl sounds were produced when H. reidi encountered stressful situations, such as handling by researchers.[61] The 'growl' sounds consist of a series of sound pulses and are emitted simultaneously with body vibrations.[61]
Some fish species create noise by engaging specialized muscles that contract and cause swimbladder vibrations.
Oyster toadfish produce loud grunting sounds by contracting muscles located along the sides of their swim bladder, known as sonic muscles.[62]
Female and male toadfishes emit short-duration grunts, often as a fright response.[63] In addition to short-duration grunts, male toadfishes produce "boat whistle calls".[64] These calls are longer in duration, lower in frequency, and are primarily used to attract mates.[64]
The sounds emitted by O. tau have a frequency range of 140 Hz to 260 Hz.[64] The frequencies of the calls depend on the rate at which the sonic muscles contract.[65][62]
The red drum, Sciaenops ocellatus, produces drumming sounds by vibrating its swimbladder.[66] Vibrations are caused by the rapid contraction of sonic muscles that surround the dorsal aspect of the swimbladder.[66] These vibrations result in repeated sounds with frequencies that range from 100 to more than 200 Hz.[66] S. ocellatus can produce different calls depending on the stimuli involved.[66] The sounds created in courtship situations are different from those made during distressing events such as predatory attacks.[66] Unlike the males of this species, the females do not produce sounds and lack sound-producing (sonic) muscles.[66]
Like other animals, fish suffer from diseases and parasites. To prevent disease they have a variety of defenses. Non-specific defenses include the skin and scales, as well as the mucus layer secreted by the epidermis that traps and inhibits the growth of microorganisms. If pathogens breach these defenses, fish can develop an inflammatory response that increases blood flow to the infected region and delivers white blood cells that attempt to destroy pathogens. Specific defenses respond to particular pathogens recognised by the fish's body, i.e., an immune response.[67] In recent years, vaccines have become widely used in aquaculture and also with ornamental fish, for example furunculosis vaccines in farmed salmon and koi herpes virus in koi.[68][69]
Some species use cleaner fish to remove external parasites. The best known of these are the Bluestreak cleaner wrasses of the genus Labroides found on coral reefs in the Indian and Pacific oceans. These small fish maintain so-called "cleaning stations" where other fish congregate and perform specific movements to attract the attention of the cleaners.[70] Cleaning behaviors have been observed in a number of fish groups, including an interesting case between two cichlids of the same genus, Etroplus maculatus, the cleaner, and the much larger Etroplus suratensis.[71]
Immune organs vary by type of fish.[72]
In the jawless fish (lampreys and hagfish), true lymphoid organs are absent. These fish rely on regions of lymphoid tissue within other organs to produce immune cells. For example, erythrocytes, macrophages and plasma cells are produced in the anterior kidney (or pronephros) and some areas of the gut (where granulocytes mature). In hagfish, these tissues resemble primitive bone marrow.
Cartilaginous fish (sharks and rays) have a more advanced immune system. They have three specialized organs that are unique to Chondrichthyes: the epigonal organs (lymphoid tissue similar to mammalian bone marrow) that surround the gonads, the Leydig's organ within the walls of their esophagus, and a spiral valve in their intestine. These organs house typical immune cells (granulocytes, lymphocytes and plasma cells). They also possess an identifiable thymus and a well-developed spleen (their most important immune organ) where various lymphocytes, plasma cells and macrophages develop and are stored.
Chondrostean fish (sturgeons, paddlefish, and bichirs) possess a major site for the production of granulocytes within a mass that is associated with the meninges (the membranes surrounding the central nervous system). Their heart is frequently covered with tissue that contains lymphocytes, reticular cells and a small number of macrophages. The chondrostean kidney is an important hemopoietic organ, where erythrocytes, granulocytes, lymphocytes and macrophages develop.
Like chondrostean fish, the major immune tissues of bony fish (or teleostei) include the kidney (especially the anterior kidney), which houses many different immune cells.[73] In addition, teleost fish possess a thymus, spleen and scattered immune areas within mucosal tissues (e.g. in the skin, gills, gut and gonads). Much like the mammalian immune system, teleost erythrocytes, neutrophils and granulocytes are believed to reside in the spleen whereas lymphocytes are the major cell type found in the thymus.[74][75] In 2006, a lymphatic system similar to that in mammals was described in one species of teleost fish, the zebrafish. Although not confirmed as yet, this system presumably will be where naive (unstimulated) T cells accumulate while waiting to encounter an antigen.[76]
B and T lymphocytes bearing immunoglobulins and T cell receptors, respectively, are found in all jawed fishes. Indeed, the adaptive immune system as a whole evolved in an ancestor of all jawed vertebrates.[77]
The 2006 IUCN Red List names 1,173 fish species that are threatened with extinction.[78] Included are species such as Atlantic cod,[79] Devil's Hole pupfish,[80] coelacanths,[81] and great white sharks.[82] Because fish live underwater they are more difficult to study than terrestrial animals and plants, and information about fish populations is often lacking. However, freshwater fish seem particularly threatened because they often live in relatively small water bodies. For example, the Devil's Hole pupfish occupies only a single 3 by 6 metres (10 by 20 ft) pool.[83]
Overfishing is a major threat to edible fish such as cod and tuna.[84][85] Overfishing eventually causes population (known as stock) collapse because the survivors cannot produce enough young to replace those removed. Such commercial extinction does not mean that the species is extinct, merely that it can no longer sustain a fishery.
One well-studied example of fishery collapse is the Pacific sardine (Sardinops sagax caerulea) fishery off the California coast. From a 1937 peak of 790,000 long tons (800,000 t), the catch steadily declined to only 24,000 long tons (24,000 t) in 1968, after which the fishery was no longer economically viable.[86]
The main tension between fisheries science and the fishing industry is that the two groups have different views on the resiliency of fisheries to intensive fishing. In places such as Scotland, Newfoundland, and Alaska the fishing industry is a major employer, so governments are predisposed to support it.[87][88] On the other hand, scientists and conservationists push for stringent protection, warning that many stocks could be wiped out within fifty years.[89][90]
A key stress on both freshwater and marine ecosystems is habitat degradation including water pollution, the building of dams, removal of water for use by humans, and the introduction of exotic species.[91] An example of a fish that has become endangered because of habitat change is the pallid sturgeon, a North American freshwater fish that lives in rivers damaged by human activity.[92]
Introduction of non-native species has occurred in many habitats. One of the best studied examples is the introduction of Nile perch into Lake Victoria in the 1960s. Nile perch gradually exterminated the lake's 500 endemic cichlid species. Some of them survive now in captive breeding programmes, but others are probably extinct.[93] Carp, snakeheads,[94] tilapia, European perch, brown trout, rainbow trout, and sea lampreys are other examples of fish that have caused problems by being introduced into alien environments.
Throughout history, humans have used fish as a food source. Historically and today, most fish protein has come from catching wild fish. However, aquaculture, or fish farming, which has been practiced since about 3500 BCE in China,[95] is becoming increasingly important in many nations. Overall, about one-sixth of the world's protein is estimated to be provided by fish.[96] That proportion is considerably elevated in some developing nations and in regions heavily dependent on the sea. Fish have likewise long been an important commodity in trade.
Catching fish for the purpose of food or sport is known as fishing, while the organized effort by humans to catch fish is called a fishery. Fisheries are a huge global business and provide income for millions of people.[96] The annual yield from all fisheries worldwide is about 154 million tons,[97] with popular species including herring, cod, anchovy, tuna, flounder, and salmon. However, the term fishery is broadly applied, and includes more organisms than just fish, such as mollusks and crustaceans, which are often called "fish" when used as food.
Fish have been recognized as a source of beauty for almost as long as they have been used for food, appearing in cave art, being raised as ornamental fish in ponds, and displayed in aquariums in homes, offices, or public settings.
Recreational fishing is fishing primarily for pleasure or competition; it can be contrasted with commercial fishing, which is fishing for profit, or subsistence fishing, which is fishing primarily for food. The most common form of recreational fishing is done with a rod, reel, line, hooks, and any one of a wide range of baits. Recreational fishing is particularly popular in North America and Europe, and state, provincial, and federal government agencies actively manage target fish species.[98][99] Angling is a method of fishing, specifically the practice of catching fish by means of an "angle" (hook). Anglers must select the right hook, cast accurately, and retrieve at the right speed while considering water and weather conditions, species, fish response, time of the day, and other factors.
Fish themes have symbolic significance in many religions. In ancient Mesopotamia, fish offerings were made to the gods from the very earliest times.[100] Fish were also a major symbol of Enki, the god of water.[100] Fish frequently appear as filling motifs in cylinder seals from the Old Babylonian (c. 1830 BC – c. 1531 BC) and Neo-Assyrian (911–609 BC) periods.[100] Starting during the Kassite Period (c. 1600 BC – c. 1155 BC) and lasting until the early Persian Period (550–30 BC), healers and exorcists dressed in ritual garb resembling the bodies of fish.[100] During the Seleucid Period (312–63 BC), the legendary Babylonian culture hero Oannes, described by Berossus, was said to have dressed in the skin of a fish.[100] Fish were sacred to the Syrian goddess Atargatis[101] and, during her festivals, only her priests were permitted to eat them.[101]
In the Book of Jonah, a work of Jewish literature probably written in the fourth century BC, the central figure, a prophet named Jonah, is swallowed by a giant fish after being thrown overboard by the crew of the ship he is travelling on.[103][104][105] The fish later vomits Jonah out on shore after three days.[103][104][105] This book was later included as part of the Hebrew Bible, or Christian Old Testament,[106][107] and a version of the story it contains is summarized in Surah 37:139-148 of the Quran.[108] Early Christians used the ichthys, a symbol of a fish, to represent Jesus,[101][102] because the Greek word for fish, ΙΧΘΥΣ Ichthys, could be used as an acronym for "Ίησοῦς Χριστός, Θεοῦ Υἱός, Σωτήρ" (Iesous Christos, Theou Huios, Soter), meaning "Jesus Christ, Son of God, Saviour".[101][102] The gospels also refer to "fishers of men"[109] and feeding the multitude. In the dhamma of Buddhism, fish symbolize happiness, as they have complete freedom of movement in the water. They are often drawn in the form of carp, which are regarded in the Orient as sacred on account of their elegant beauty, size, and life-span.
Among the deities said to take the form of a fish are Ika-Roa of the Polynesians, Dagon of various ancient Semitic peoples, the shark-gods of Hawaiʻi and Matsya of the Hindus. The astrological symbol Pisces is based on a constellation of the same name, but there is also a second fish constellation in the night sky, Piscis Austrinus.[110]
Fish feature prominently in art and literature, in movies such as Finding Nemo and books such as The Old Man and the Sea. Large fish, particularly sharks, have frequently been the subject of horror movies and thrillers, most notably the novel Jaws, which spawned a series of films of the same name that in turn inspired similar films or parodies such as Shark Tale and Snakehead Terror. Piranhas are shown in a similar light to sharks in films such as Piranha; however, contrary to popular belief, the red-bellied piranha is actually a generally timid scavenger species that is unlikely to harm humans.[111] Legends of half-human, half-fish mermaids have featured in folklore, including the stories of Hans Christian Andersen.
Though often used interchangeably, in biology these words have different meanings. Fish is used as a singular noun, or as a plural to describe multiple individuals from a single species. Fishes is used to describe different species or species groups.[112][113][114] Thus a pond would be said to contain 120 fish if all were from a single species or 120 fishes if these included a mix of several species. The distinction is similar to that between people and peoples.
A random assemblage of fish merely using some localised resource such as food or nesting sites is known simply as an aggregation. When fish come together in an interactive, social grouping, then they may be forming either a shoal or a school depending on the degree of organisation. A shoal is a loosely organised group where each fish swims and forages independently but is attracted to other members of the group and adjusts its behaviour, such as swimming speed, so that it remains close to the other members of the group. Schools of fish are much more tightly organised, synchronising their swimming so that all fish move at the same speed and in the same direction. Shoaling and schooling behaviour is believed to provide a variety of advantages.[116]
While the words "school" and "shoal" have different meanings within biology, the distinctions are often ignored by non-specialists who treat the words as synonyms. Thus speakers of British English commonly use "shoal" to describe any grouping of fish, and speakers of American English commonly use "school" just as loosely.[117]
en/4691.html.txt
The goldfish (Carassius auratus) is a freshwater fish in the family Cyprinidae of order Cypriniformes. It is one of the most commonly kept aquarium fish.
A relatively small member of the carp family (which also includes the Prussian carp and the crucian carp), the goldfish is native to East Asia. It was first selectively bred in ancient China more than 1,000 years ago, and several distinct breeds have since been developed. Goldfish breeds vary greatly in size, body shape, fin configuration and coloration (various combinations of white, yellow, orange, red, brown, and black are known).
Starting in ancient China, various species of carp (collectively known as Asian carp) have been bred and reared as food fish for thousands of years. Some of these normally gray or silver species have a tendency to produce red, orange or yellow color mutations; this was first recorded during the Jin dynasty (AD 265–420).[4][5]
During the Tang dynasty (AD 618–907), it was popular to raise carp in ornamental ponds and water gardens. A natural genetic mutation produced gold (actually yellowish orange) rather than silver coloration. People began to breed the gold variety instead of the silver variety, keeping them in ponds or other bodies of water. On special occasions at which guests were expected, they would be moved to a much smaller container for display.[6][7]
By the Song dynasty (AD 960–1279), the selective domestic breeding of goldfish was firmly established.[8] In 1162, the empress of the Song Dynasty ordered the construction of a pond to collect the red and gold variety. By this time, people outside the imperial family were forbidden to keep goldfish of the gold (yellow) variety, yellow being the imperial color. This is probably the reason why there are more orange goldfish than yellow goldfish, even though the latter are genetically easier to breed.[9] The occurrence of other colors (apart from red and gold) was first recorded in 1276.[citation needed]
During the Ming dynasty (1368–1644), goldfish also began to be raised indoors,[5] which permitted selection for mutations that would not be able to survive in ponds.[6] The first occurrence of fancy-tailed goldfish was recorded in the Ming Dynasty. In 1603, goldfish were introduced to Japan.[6] In 1611, goldfish were introduced to Portugal and from there to other parts of Europe.[6]
During the 1620s, goldfish were highly regarded in southern Europe because of their metallic scales, and symbolized good luck and fortune. It became a tradition for married men to give their wives a goldfish on their first anniversary, as a symbol for the prosperous years to come. This tradition quickly died out as goldfish became more available and lost their status. Goldfish were first introduced to North America around 1850 and quickly became popular in the United States.[10][11]
As of April 2008, the largest goldfish in the world was believed by the BBC to measure 19 inches (48 cm), and to be living in the Netherlands.[12] At the time, a goldfish named "Goldie", kept as a pet in a tank in Folkestone, England, was measured as 15 inches (38 cm) and over 2 pounds (0.91 kg), and named as the second largest in the world behind the Netherlands fish.[12] The secretary of the Federation of British Aquatic Societies (FBAS) stated of Goldie's size, "I would think there are probably a few bigger goldfish that people don't think of as record holders, perhaps in ornamental lakes".[12] In July 2010, a goldfish measuring 16 inches (41 cm) and 5 pounds (2.3 kg) was caught in a pond in Poole, England, thought to have been abandoned there after outgrowing a tank.[13]
Goldfish have one of the most studied senses of vision in fishes.[14] Goldfish have four kinds of cone cells, which are respectively sensitive to different colors: red, green, blue and ultraviolet. The ability to distinguish between four different primary colors classifies them as tetrachromats.[15]
Goldfish have one of the most studied senses of hearing in fish.[16] They have two otoliths, permitting the detection of sound particle motion, and Weberian ossicles connecting the swimbladder to the otoliths, facilitating the detection of sound pressure.[17]
Goldfish have strong associative learning abilities, as well as social learning skills. In addition, their visual acuity allows them to distinguish between individual humans. Owners may notice that fish react favorably to them (swimming to the front of the glass, swimming rapidly around the tank, and going to the surface mouthing for food) while hiding when other people approach the tank. Over time, goldfish learn to associate their owners and other humans with food, often "begging" for food whenever their owners approach.[citation needed]
Goldfish that have constant visual contact with humans also stop considering them to be a threat. After being kept in a tank for several weeks, sometimes months, it becomes possible to feed a goldfish by hand without it shying away.
Goldfish have a memory-span of at least three months and can distinguish between different shapes, colors and sounds.[18][19] By using positive reinforcement, goldfish can be trained to recognize and to react to light signals of different colors[20] or to perform tricks.[21] Fish respond to certain colors most evidently in relation to feeding.[citation needed] Fish learn to anticipate feedings provided they occur at around the same time every day.
Goldfish are gregarious, displaying schooling behavior as well as the same types of feeding behaviors. Goldfish may display similar behaviors when responding to their reflections in a mirror.[citation needed]
Goldfish have learned behaviors, both as groups and as individuals, that stem from native carp behavior. They are a generalist species with varied feeding, breeding, and predator avoidance behaviors that contribute to their success. As fish, they can be described as "friendly" towards each other. Very rarely does a goldfish harm another goldfish, nor do the males harm the females during breeding. The only real threat that goldfish present to each other is competing for food. Commons, comets, and other faster varieties can easily eat all the food during a feeding before fancy varieties can reach it. This can lead to stunted growth or possible starvation of fancier varieties when they are kept in a pond with their single-tailed brethren. As a result, care should be taken to combine only breeds with similar body type and swim characteristics.
In the wild, the diet of goldfish consists of crustaceans, insects, and various plant matter. Like most fish, they are opportunistic feeders and do not stop eating on their own accord. Overfeeding can be deleterious to their health, typically by blocking the intestines. This happens most often with selectively bred goldfish, which have a convoluted intestinal tract. When excess food is available, they produce more waste and feces, partly due to incomplete protein digestion. Overfeeding can sometimes be diagnosed by observing feces trailing from the fish's cloaca.
Goldfish-specific food has less protein and more carbohydrate than conventional fish food. Enthusiasts may supplement this diet with shelled peas (with outer skins removed), blanched green leafy vegetables, and bloodworms. Young goldfish benefit from the addition of brine shrimp to their diet. As with all animals, goldfish preferences vary.
Goldfish may only grow to sexual maturity with enough water and the right nutrition. Most goldfish breed in captivity, particularly in pond settings. Breeding usually happens after a significant temperature change, often in spring. Males chase gravid female goldfish (females carrying eggs), and prompt them to release their eggs by bumping and nudging them.
Goldfish, like all cyprinids, are egg-layers. Their eggs are adhesive and attach to aquatic vegetation, typically dense plants such as Cabomba or Elodea or a spawning mop. The eggs hatch within 48 to 72 hours.
Within a week or so, the fry begin to assume their final shape, although a year may pass before they develop a mature goldfish color; until then they are a metallic brown like their wild ancestors. In their first weeks of life, the fry grow quickly—an adaptation born of the high risk of getting devoured by the adult goldfish (or other fish and insects) in their environment.[22]
Some highly selectively bred goldfish can no longer breed naturally due to their altered shape. The artificial breeding method called "hand stripping" can assist nature, but can harm the fish if not done correctly. In captivity, adults may also eat young that they encounter.
Breeding goldfish by the hobbyist is the process of selecting adult fish to reproduce, allowing them to reproduce and then raising the resulting offspring while continually removing fish that do not approach the desired pedigree.[23]
The market for live goldfish and other crucian carp, usually imported from China, was $1.2 million in 2018. Some high-quality varieties cost between $125 and $300.[24]
Selective breeding over centuries has produced several color variations, some of them far removed from the "golden" color of the original fish. There are also different body shapes, and fin and eye configurations. Some extreme versions of the goldfish live only in aquariums—they are much less hardy than varieties closer to the "wild" original. However, some variations are hardier, such as the Shubunkin. Currently, there are about 300 breeds recognized in China.[5] The vast majority of goldfish breeds today originated from China.[5] Some of the main varieties are:
Chinese tradition classifies goldfish into four main types.[32] These classifications are not commonly used in the West.
While it was believed for some time that the Prussian carp (Carassius gibelio) was the closest wild relative of the goldfish,[33][34] modern genetic sequencing has shown otherwise.[35] C. auratus are differentiated from other Carassius species by several characteristics. C. auratus have a more pointed snout, while the snout of C. carassius is well rounded. C. gibelio often has a grayish/greenish color, while crucian carp are always golden bronze. Juvenile crucian carp have a black spot on the base of the tail, which disappears with age. In C. auratus, this tail spot is never present. C. auratus have fewer than 31 scales along the lateral line, while crucian carp have 33 scales or more.
Like their wild ancestors, common and comet goldfish as well as Shubunkin can survive, and even thrive, in any climate that can support a pond, whereas fancy goldfish are unlikely to survive in the wild as their bright colors and long fins make them easy prey. Goldfish can hybridize with certain other Carassius as well as other species of carp. Within three breeding generations, the vast majority of the hybrid spawn revert to the wild type color. Koi and common carp may also interbreed with goldfish to produce sterile hybrids.
Like most species in the carp family, goldfish produce a large amount of waste both in their feces and through their gills, releasing harmful chemicals into the water. Build-up of this waste to toxic levels can occur in a relatively short period of time, and can easily cause a goldfish's death. For common and comet varieties, each goldfish should have about 20 US gallons (76 l; 17 imp gal) of water. Fancy goldfish (which are smaller) should have about 10 US gallons (38 l; 8.3 imp gal) per goldfish. The water surface area determines how much oxygen diffuses and dissolves into the water. A general rule is to have at least 1 square foot (0.093 m2) of water surface area for every inch of fish length. Active aeration by way of a water pump, filter or fountain effectively increases the surface area.[citation needed]
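The per-fish volume guideline above reduces to simple arithmetic. The sketch below uses the figures stated in the text (20 US gallons per single-tailed fish, 10 per fancy fish); the function name and structure are purely illustrative, not part of any real aquarium-keeping library.

```python
# Illustrative sketch of the stocking guideline described above.
# The per-fish volumes come from the text; the function itself is hypothetical.

GALLONS_PER_SINGLE_TAILED = 20  # common, comet and similar varieties
GALLONS_PER_FANCY = 10          # smaller fancy varieties

def minimum_tank_gallons(single_tailed: int, fancy: int) -> int:
    """Return the minimum recommended water volume in US gallons."""
    return (single_tailed * GALLONS_PER_SINGLE_TAILED
            + fancy * GALLONS_PER_FANCY)

# Example: two comets and three fancy goldfish
print(minimum_tank_gallons(2, 3))  # 2*20 + 3*10 = 70
```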
The goldfish is classified as a coldwater fish, and can live in unheated aquaria at a temperature comfortable for humans. However, rapid changes in temperature (for example in an office building in winter when the heat is turned off at night) can kill them, especially if the tank is small. Care must also be taken when adding water, as the new water may be of a different temperature. Temperatures under about 10 °C (50 °F) are dangerous to fancy varieties, though commons and comets can survive slightly lower temperatures. Extremely high temperatures (over 30 °C (86 °F)) can also harm goldfish. However, higher temperatures may help fight protozoan infestations by accelerating the parasite's life-cycle—thus eliminating it more quickly. The optimum temperature for goldfish is between 20 °C (68 °F) and 22 °C (72 °F).[36]
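The temperature ranges above can be checked programmatically. The thresholds below are the ones stated in the text; everything else (the function name and the status labels) is a hypothetical sketch.

```python
# Hypothetical classifier for the temperature guidance described above.

def water_temperature_status(celsius: float, fancy: bool = True) -> str:
    """Classify a water temperature against the ranges given in the text."""
    if celsius > 30:             # over 30 °C (86 °F) can harm goldfish
        return "too hot"
    if 20 <= celsius <= 22:      # optimum range: 20-22 °C (68-72 °F)
        return "optimal"
    if celsius < 10 and fancy:   # under ~10 °C is dangerous for fancy varieties
        return "dangerous for fancy varieties"
    return "tolerable"

print(water_temperature_status(21))  # optimal
print(water_temperature_status(8))   # dangerous for fancy varieties
```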
Like all fish, goldfish do not like to be petted. In fact, touching a goldfish can endanger its health, because it can cause the protective slime coat to be damaged or removed, exposing the fish's skin to infection from bacteria or water-borne parasites. However, goldfish respond to people by surfacing at feeding time, and can be trained or acclimated to taking pellets or flakes from human fingers. The reputation of goldfish dying quickly is often due to poor care.[37] The lifespan of goldfish in captivity can extend beyond 10 years.[38]
If left in the dark for a period of time, goldfish gradually change color until they are almost gray.[citation needed] Goldfish produce pigment in response to light, in a similar manner to how human skin becomes tanned in the sun. Fish have cells called chromatophores that produce pigments which reflect light, and give the fish coloration. The color of a goldfish is determined by which pigments are in the cells, how many pigment molecules there are, and whether the pigment is grouped inside the cell or is spaced throughout the cytoplasm.[citation needed]
Because goldfish eat live plants, their presence in a planted aquarium can be problematic. Only a few aquarium plant species (for example Cryptocoryne and Anubias) can survive around goldfish, but they require special attention so that they are not uprooted. Plastic plants are more durable.[citation needed]
Goldfish are popular pond fish, since they are small, inexpensive, colorful and very hardy. In an outdoor pond or water garden, they may even survive for brief periods if ice forms on the surface, as long as there is enough oxygen remaining in the water and the pond does not freeze solid. Common, London and Bristol shubunkins, jikin, wakin, comet and some hardier fantail goldfish can be kept in a pond all year round in temperate and subtropical climates. Moor, veiltail, oranda and lionhead can be kept safely in outdoor ponds year-round only in more tropical climates and only in summer elsewhere.
Ponds small and large are fine in warmer areas (although goldfish can "overheat" in small volumes of water in the summer in tropical climates). In frosty climes, the depth should be at least 80 centimeters (31 in) to preclude freezing. During winter, goldfish become sluggish, stop eating and often stay on the bottom of the pond. This is normal; they become active again in the spring. Unless the pond is large enough to maintain its own ecosystem without interference from humans, a filter is important to clear waste and keep the pond clean. Plants are essential as they act as part of the filtration system, as well as a food source for the fish. Plants are further beneficial since they raise oxygen levels in the water.
Compatible fish include rudd, tench, orfe and koi, but the latter require specialized care. Ramshorn snails are helpful by eating any algae that grows in the pond. Without some form of animal population control, goldfish ponds can easily become overstocked. Fish such as orfe consume goldfish eggs.
Like some other popular aquarium fish, such as the guppy, goldfish and other carp are frequently added to stagnant bodies of water to reduce mosquito populations. They are used to prevent the spread of West Nile virus, which relies on mosquitoes to migrate. However, introducing goldfish has often had negative consequences for local ecosystems.[39]
Fishbowls are detrimental to the health of goldfish and are prohibited by animal welfare legislation in several municipalities.[40][41] The practice of using bowls as permanent fish housing originated from a misunderstanding of Chinese "display" vessels: goldfish which were normally housed in ponds were, on occasion, temporarily displayed in smaller containers to be better admired by guests.[6]
Goldfish kept in bowls or "mini-aquariums" suffer from death, disease, and stunting, due primarily to the low oxygen and very high ammonia/nitrite levels inherent in such an environment.[42] In comparison to other common aquarium fish, goldfish have high oxygen needs and produce a large amount of waste; therefore they require a substantial volume of well-filtered water to thrive. In addition, all goldfish varieties have the potential to reach 5 inches (12.7 cm) in total length, with single-tailed breeds often exceeding one foot (30.5 cm). Single-tailed varieties include common and comet goldfish.
In many countries, carnival and fair operators commonly give goldfish away in plastic bags as prizes. In late 2005 Rome banned the use of goldfish and other animals as carnival prizes. Rome has also banned the use of "goldfish bowls", on animal cruelty grounds,[40] as well as Monza, Italy, in 2004.[41] In the United Kingdom, the government proposed banning this practice as part of its Animal Welfare Bill,[43][44] though this has since been amended to only prevent goldfish being given as prizes to unaccompanied minors.[45]
In Japan, during summer festivals and religious holidays (ennichi), a traditional game called goldfish scooping is played, in which a player scoops goldfish from a basin with a special scooper. Sometimes bouncy balls are substituted for goldfish.
Although edible and closely related to some fairly widely eaten species, goldfish are rarely eaten. A fad among American college students for many years was swallowing goldfish as a stunt and as a fraternity initiation process. The first recorded instance was in 1939 at Harvard University.[46] The practice gradually fell out of popularity over the course of several decades and is rarely practiced today.
In Iran and among the international Iranian diaspora, goldfish are a traditional part of Nowruz celebrations. Some animal advocates have called for boycotts of goldfish purchases, citing industrial farming and low survival rates of the fish.[47][48]
en/4692.html.txt
Fish are gill-bearing aquatic craniate animals that lack limbs with digits. They form a sister group to the tunicates, together forming the Olfactores. Included in this definition are the living hagfish, lampreys, and cartilaginous and bony fish as well as various extinct related groups.
The earliest organisms that can be classified as fish were soft-bodied chordates that first appeared during the Cambrian period. Although they lacked a true spine, they possessed notochords which allowed them to be more agile than their invertebrate counterparts. Fish would continue to evolve through the Paleozoic era, diversifying into a wide variety of forms. Many fish of the Paleozoic developed external armor that protected them from predators. The first fish with jaws appeared in the Silurian period, after which many (such as sharks) became formidable marine predators rather than just the prey of arthropods.
Most fish are ectothermic ("cold-blooded"), allowing their body temperatures to vary as ambient temperatures change, though some of the large active swimmers, like the white shark and tuna, can hold a higher core temperature.[1][2]
Fish can communicate in their underwater environments through the use of acoustic communication. Acoustic communication in fish involves the transmission of acoustic signals from one individual of a species to another. The production of sounds as a means of communication among fish is most often used in the context of feeding, aggression or courtship behaviour.[3] The sounds emitted by fish can vary depending on the species and stimulus involved. They can produce either stridulatory sounds by moving components of the skeletal system, or can produce non-stridulatory sounds by manipulating specialized organs such as the swimbladder.[4]
Fish are abundant in most bodies of water. They can be found in nearly all aquatic environments, from high mountain streams (e.g., char and gudgeon) to the abyssal and even hadal depths of the deepest oceans (e.g., cusk-eels and snailfish), although no species has yet been documented in the deepest 25% of the ocean.[5] With 34,300 described species, fish exhibit greater species diversity than any other group of vertebrates.[6]
Fish are an important resource for humans worldwide, especially as food. Commercial and subsistence fishers hunt fish in wild fisheries (see fishing) or farm them in ponds or in cages in the ocean (see aquaculture). They are also caught by recreational fishers, kept as pets, raised by fishkeepers, and exhibited in public aquaria. Fish have had a role in culture through the ages, serving as deities, religious symbols, and as the subjects of art, books and movies.
Tetrapods emerged within lobe-finned fishes, so cladistically they are fish as well. However, fish are traditionally rendered paraphyletic by excluding the tetrapods (i.e., the amphibians, reptiles, birds and mammals, which all descended from within the same ancestry). Because in this manner the term "fish" is defined negatively as a paraphyletic group, it is not considered a formal taxonomic grouping in systematic biology, unless it is used in the cladistic sense, including tetrapods.[7][8] The traditional term pisces (also ichthyes) is considered a typological, but not a phylogenetic classification.
The word for fish in English and the other Germanic languages (German fisch; Gothic fisks) is inherited from Proto-Germanic, and is related to the Latin piscis and Old Irish īasc, though the exact root is unknown; some authorities reconstruct a Proto-Indo-European root *peysk-, attested only in Italic, Celtic, and Germanic.[9][10][11][12]
Fish, as vertebrates, developed as a sister group of the tunicates. Because the tetrapods emerged deep within the fish group, as the sister group of the lungfish, characteristics of fish are typically shared by tetrapods, including having vertebrae and a cranium.
Early fish from the fossil record are represented by a group of small, jawless, armored fish known as ostracoderms. Jawless fish lineages are mostly extinct. An extant clade, the lampreys, may approximate ancient pre-jawed fish. The first jaws are found in Placodermi fossils. They lacked distinct teeth, having instead the oral surfaces of their jaw plates modified to serve the various purposes of teeth. The diversity of jawed vertebrates may indicate the evolutionary advantage of a jawed mouth. It is unclear if the advantage of a hinged jaw is greater biting force, improved respiration, or a combination of factors.
Fish may have evolved from a creature similar to a coral-like sea squirt, whose larvae resemble primitive fish in important ways. The first ancestors of fish may have kept the larval form into adulthood (as some sea squirts do today), although perhaps the reverse is the case.
Fish are a paraphyletic group: that is, any clade containing all fish also contains the tetrapods, which are not fish. For this reason, groups such as the class Pisces seen in older reference works are no longer used in formal classifications.
Traditional classification divides fish into three extant classes, with extinct forms sometimes classified within those groups and sometimes as classes of their own:[14][15]
The above scheme is the one most commonly encountered in non-specialist and general works. Many of the above groups are paraphyletic, in that they have given rise to successive groups: agnathans are ancestral to Chondrichthyes, which in turn gave rise to acanthodians, the ancestors of Osteichthyes. With the arrival of phylogenetic nomenclature, fishes have been split up into a more detailed scheme, with the following major groups:
† – indicates extinct taxon. Some palaeontologists contend that because Conodonta are chordates, they are primitive fish. For a fuller treatment of this taxonomy, see the vertebrate article.
The position of hagfish in the phylum Chordata is not settled. Phylogenetic research in 1998 and 1999 supported the idea that the hagfish and the lampreys form a natural group, the Cyclostomata, that is a sister group of the Gnathostomata.[16][17]
The various fish groups account for more than half of vertebrate species. There are almost 28,000 known extant species, of which almost 27,000 are bony fish, with 970 sharks, rays, and chimeras and about 108 hagfish and lampreys.[18] A third of these species fall within the nine largest families; from largest to smallest, these families are Cyprinidae, Gobiidae, Cichlidae, Characidae, Loricariidae, Balitoridae, Serranidae, Labridae, and Scorpaenidae. About 64 families are monotypic, containing only one species. The final total of extant species may grow to exceed 32,500.[19]
Agnatha (Pacific hagfish)
Chondrichthyes (Horn shark)
Actinopterygii (Brown trout)
Sarcopterygii (Coelacanth)
The term "fish" most precisely describes any non-tetrapod craniate (i.e. an animal with a skull and in most cases a backbone) that has gills throughout life and whose limbs, if any, are in the shape of fins.[21] Unlike groupings such as birds or mammals, fish are not a single clade but a paraphyletic collection of taxa, including hagfishes, lampreys, sharks and rays, ray-finned fish, coelacanths, and lungfish.[22][23] Indeed, lungfish and coelacanths are closer relatives of tetrapods (such as mammals, birds, amphibians, etc.) than of other fish such as ray-finned fish or sharks, so the last common ancestor of all fish is also an ancestor to tetrapods. As paraphyletic groups are no longer recognised in modern systematic biology, the use of the term "fish" as a biological group must be avoided.
Many types of aquatic animals commonly referred to as "fish" are not fish in the sense given above; examples include shellfish, cuttlefish, starfish, crayfish and jellyfish. In earlier times, even biologists did not make a distinction – sixteenth century natural historians classified also seals, whales, amphibians, crocodiles, even hippopotamuses, as well as a host of aquatic invertebrates, as fish.[24] However, according to the definition above, all mammals, including cetaceans like whales and dolphins, are not fish. In some contexts, especially in aquaculture, the true fish are referred to as finfish (or fin fish) to distinguish them from these other animals.
A typical fish is ectothermic, has a streamlined body for rapid swimming, extracts oxygen from water using gills or uses an accessory breathing organ to breathe atmospheric oxygen, has two sets of paired fins, usually one or two (rarely three) dorsal fins, an anal fin, and a tail fin, has jaws, has skin that is usually covered with scales, and lays eggs.
Each criterion has exceptions. Tuna, swordfish, and some species of sharks show some warm-blooded adaptations – they can heat their bodies significantly above ambient water temperature.[22] Streamlining and swimming performance varies from fish such as tuna, salmon, and jacks that can cover 10–20 body-lengths per second to species such as eels and rays that swim no more than 0.5 body-lengths per second.[25] Many groups of freshwater fish extract oxygen from the air as well as from the water using a variety of different structures. Lungfish have paired lungs similar to those of tetrapods, gouramis have a structure called the labyrinth organ that performs a similar function, while many catfish, such as Corydoras, extract oxygen via the intestine or stomach.[26] Body shape and the arrangement of the fins are highly variable, covering such seemingly un-fishlike forms as seahorses, pufferfish, anglerfish, and gulpers. Similarly, the surface of the skin may be naked (as in moray eels), or covered with scales of a variety of different types usually defined as placoid (typical of sharks and rays), cosmoid (fossil lungfish and coelacanths), ganoid (various fossil fish but also living gars and bichirs), cycloid, and ctenoid (these last two are found on most bony fish).[27] There are even fish that live mostly on land or lay their eggs on land near water.[28] Mudskippers feed and interact with one another on mudflats and go underwater to hide in their burrows.[29] A single, undescribed species of Phreatobius has been called a true "land fish" as this worm-like catfish strictly lives among waterlogged leaf litter.[30][31] Many species live in underground lakes, underground rivers or aquifers and are popularly known as cavefish.[32]
Fish range in size from the huge 16-metre (52 ft) whale shark to the tiny 8-millimetre (0.3 in) stout infantfish.
Fish species diversity is roughly divided equally between marine (oceanic) and freshwater ecosystems. Coral reefs in the Indo-Pacific constitute the center of diversity for marine fishes, whereas continental freshwater fishes are most diverse in large river basins of tropical rainforests, especially the Amazon, Congo, and Mekong basins. More than 5,600 fish species inhabit Neotropical freshwaters alone, such that Neotropical fishes represent about 10% of all vertebrate species on the Earth. Exceptionally rich sites in the Amazon basin, such as Cantão State Park, can contain more freshwater fish species than occur in all of Europe.[33]
Most fish exchange gases using gills on either side of the pharynx. Gills consist of threadlike structures called filaments. Each filament contains a capillary network that provides a large surface area for exchanging oxygen and carbon dioxide. Fish exchange gases by pulling oxygen-rich water through their mouths and pumping it over their gills. In some fish, capillary blood flows in the opposite direction to the water, causing countercurrent exchange. The gills push the oxygen-poor water out through openings in the sides of the pharynx. Some fish, like sharks and lampreys, possess multiple gill openings. However, bony fish have a single gill opening on each side. This opening is hidden beneath a protective bony cover called an operculum.
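The advantage of countercurrent exchange can be sketched numerically. In the toy model below a gill is divided into segments, and in each segment a fixed fraction of the local oxygen difference crosses from water to blood; the segment count and transfer fraction are illustrative assumptions, not physiological measurements.

```python
# Toy model of gas exchange at a gill: why countercurrent flow (blood and
# water moving in opposite directions) beats cocurrent flow (same direction).
# Segment count and transfer fraction are illustrative assumptions.

def cocurrent_uptake(n_segments=50, k=0.1):
    """Blood and water travel together, so the O2 gradient collapses
    as the two streams equilibrate."""
    water, blood = 1.0, 0.0
    for _ in range(n_segments):
        flux = k * (water - blood)  # transfer proportional to local O2 difference
        water -= flux
        blood += flux
    return blood  # cannot exceed 0.5, the equilibrium of the two streams

def countercurrent_uptake(n_segments=50, k=0.1):
    """Blood always meets fresher water, so a gradient persists along the
    whole gill. For simplicity, water is treated as in excess, so each
    segment holds near-inlet (fully oxygenated) water."""
    blood = 0.0
    for _ in range(n_segments):
        blood += k * (1.0 - blood)  # each segment offers nearly saturated water
    return blood  # approaches full saturation (1.0)

print(cocurrent_uptake())       # ~0.5
print(countercurrent_uptake())  # ~0.99
```

The qualitative point is the one made in the paragraph above: with flows in opposite directions a concentration difference is maintained along the entire exchange surface, so uptake is not capped at the midpoint equilibrium.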
Juvenile bichirs have external gills, a very primitive feature that they share with larval amphibians.
Fish from multiple groups can live out of the water for extended periods. Amphibious fish such as the mudskipper can live and move about on land for up to several days, or live in stagnant or otherwise oxygen-depleted water. Many such fish can breathe air via a variety of mechanisms. The skin of anguillid eels may absorb oxygen directly. The electric eel can absorb oxygen through its buccal cavity. Catfish of the families Loricariidae, Callichthyidae, and Scoloplacidae absorb air through their digestive tracts.[34] Lungfish, with the exception of the Australian lungfish, and bichirs have paired lungs similar to those of tetrapods and must surface to gulp fresh air through the mouth and pass spent air out through the gills. Gar and bowfin have a vascularized swim bladder that functions in the same way. Loaches, trahiras, and many catfish breathe by passing air through the gut. Mudskippers breathe by absorbing oxygen across the skin (similar to frogs). A number of fish have evolved so-called accessory breathing organs that extract oxygen from the air. Labyrinth fish (such as gouramis and bettas) have a labyrinth organ above the gills that performs this function. A few other fish have structures resembling labyrinth organs in form and function, most notably snakeheads, pikeheads, and the Clariidae catfish family.
Breathing air is primarily of use to fish that inhabit shallow, seasonally variable waters where oxygen concentration may decline seasonally. Fish dependent solely on dissolved oxygen, such as perch and cichlids, quickly suffocate, while air-breathers survive for much longer, in some cases in water that is little more than wet mud. At the most extreme, some air-breathing fish are able to survive in damp burrows for weeks without water, entering a state of aestivation (summertime hibernation) until water returns.
Air-breathing fish can be divided into obligate air breathers and facultative air breathers. Obligate air breathers, such as the African lungfish, must breathe air periodically or they suffocate. Facultative air breathers, such as the catfish Hypostomus plecostomus, only breathe air if they need to and will otherwise rely on their gills for oxygen. Most air-breathing fish are facultative air breathers that avoid the energetic cost of rising to the surface and the fitness cost of exposure to surface predators.[34]
Fish have a closed-loop circulatory system. The heart pumps the blood in a single loop throughout the body. In most fish, the heart consists of four parts, including two chambers and an entrance and exit.[35] The first part is the sinus venosus, a thin-walled sac that collects blood from the fish's veins before allowing it to flow to the second part, the atrium, a large muscular chamber. The atrium serves as a one-way antechamber that sends blood to the third part, the ventricle. The ventricle is another thick-walled, muscular chamber; it pumps the blood first to the fourth part, the bulbus arteriosus, a large tube, and then out of the heart. The bulbus arteriosus connects to the aorta, through which blood flows to the gills for oxygenation.
Jaws allow fish to eat a wide variety of food, including plants and other organisms. Fish ingest food through the mouth and break it down in the esophagus. In the stomach, food is further digested and, in many fish, processed in finger-shaped pouches called pyloric caeca, which secrete digestive enzymes and absorb nutrients. Organs such as the liver and pancreas add enzymes and various chemicals as the food moves through the digestive tract. The intestine completes the process of digestion and nutrient absorption.
As with many aquatic animals, most fish release their nitrogenous wastes as ammonia. Some of the wastes diffuse through the gills. Blood wastes are filtered by the kidneys.
Saltwater fish tend to lose water because of osmosis. Their kidneys return water to the body. The reverse happens in freshwater fish: they tend to gain water osmotically. Their kidneys produce dilute urine for excretion. Some fish have specially adapted kidneys that vary in function, allowing them to move from freshwater to saltwater.
The scales of fish originate from the mesoderm (skin); they may be similar in structure to teeth.
Compared with other vertebrates, fish typically have quite small brains relative to body size, about one-fifteenth the brain mass of a similarly sized bird or mammal.[36] However, some fish have relatively large brains, most notably mormyrids and sharks, which have brains about as massive relative to body weight as birds and marsupials.[37]
Fish brains are divided into several regions. At the front are the olfactory lobes, a pair of structures that receive and process signals from the nostrils via the two olfactory nerves.[36] The olfactory lobes are very large in fish that hunt primarily by smell, such as hagfish, sharks, and catfish. Behind the olfactory lobes is the two-lobed telencephalon, the structural equivalent to the cerebrum in higher vertebrates. In fish the telencephalon is concerned mostly with olfaction.[36] Together these structures form the forebrain.
Connecting the forebrain to the midbrain is the diencephalon (in the diagram, this structure is below the optic lobes and consequently not visible). The diencephalon performs functions associated with hormones and homeostasis.[36] The pineal body lies just above the diencephalon. This structure detects light, maintains circadian rhythms, and controls color changes.[36]
The midbrain (or mesencephalon) contains the two optic lobes. These are very large in species that hunt by sight, such as rainbow trout and cichlids.[36]
The hindbrain (or metencephalon) is particularly involved in swimming and balance.[36] The cerebellum is a single-lobed structure that is typically the biggest part of the brain.[36] Hagfish and lampreys have relatively small cerebella, while the mormyrid cerebellum is massive and apparently involved in their electrical sense.[36]
The brain stem (or myelencephalon) is the brain's posterior.[36] As well as controlling some muscles and body organs, in bony fish at least, the brain stem governs respiration and osmoregulation.[36]
Most fish possess highly developed sense organs. Nearly all daylight fish have color vision that is at least as good as a human's (see vision in fishes). Many fish also have chemoreceptors that are responsible for extraordinary senses of taste and smell. Although they have ears, many fish may not hear very well. Most fish have sensitive receptors that form the lateral line system, which detects gentle currents and vibrations, and senses the motion of nearby fish and prey.[38] Some fish, such as catfish and sharks, have the ampullae of Lorenzini, electroreceptors that detect weak electric currents on the order of a millivolt.[39] Other fish, like the South American electric fishes Gymnotiformes, can produce weak electric currents, which they use in navigation and social communication.
Fish orient themselves using landmarks and may use mental maps based on multiple landmarks or symbols. Fish behavior in mazes reveals that they possess spatial memory and visual discrimination.[40]
Vision is an important sensory system for most species of fish. Fish eyes are similar to those of terrestrial vertebrates like birds and mammals, but have a more spherical lens. Their retinas generally have both rods and cones (for scotopic and photopic vision), and most species have colour vision. Some fish can see ultraviolet and some can see polarized light. Amongst jawless fish, the lamprey has well-developed eyes, while the hagfish has only primitive eyespots.[41] Fish vision shows adaptation to their visual environment, for example deep sea fishes have eyes suited to the dark environment.
Hearing is an important sensory system for most species of fish. Fish sense sound using their lateral lines and their ears.
New research has expanded preconceptions about the cognitive capacities of fish. For example, manta rays have exhibited behavior linked to self-awareness in mirror test cases. Placed in front of a mirror, individual rays engaged in contingency testing, that is, repetitive behavior aiming to check whether their reflection's behavior mimics their body movement.[42]
Wrasses have also passed the mirror test in a 2018 scientific study.[43][44]
Cases of tool use have also been noticed, notably in wrasses of the genus Choerodon, in archerfish, and in Atlantic cod.[45]
Experiments done by William Tavolga provide evidence that fish have pain and fear responses. For instance, in Tavolga's experiments, toadfish grunted when electrically shocked and over time they came to grunt at the mere sight of an electrode.[46]
In 2003, Scottish scientists at the University of Edinburgh and the Roslin Institute concluded that rainbow trout exhibit behaviors often associated with pain in other animals. Bee venom and acetic acid injected into the lips resulted in fish rocking their bodies and rubbing their lips along the sides and floors of their tanks, which the researchers concluded were attempts to relieve pain, similar to what mammals would do.[47][48] Neurons fired in a pattern resembling human neuronal patterns.[48]
Professor James D. Rose of the University of Wyoming claimed the study was flawed since it did not provide proof that fish possess "conscious awareness, particularly a kind of awareness that is meaningfully like ours".[49] Rose argues that since fish brains are so different from human brains, fish are probably not conscious in the manner humans are, so that reactions similar to human reactions to pain instead have other causes. Rose had published a study a year earlier arguing that fish cannot feel pain because their brains lack a neocortex.[50] However, animal behaviorist Temple Grandin argues that fish could still have consciousness without a neocortex because "different species can use different brain structures and systems to handle the same functions."[48]
Animal welfare advocates raise concerns about the possible suffering of fish caused by angling. Some countries, such as Germany, have banned specific types of fishing, and the British RSPCA now formally prosecutes individuals who are cruel to fish.[51]
In 2019, scientists showed that members of the monogamous species Amatitlania siquia exhibit pessimistic behavior when they are prevented from being with their partner.[52]
Most fish move by alternately contracting paired sets of muscles on either side of the backbone. These contractions form S-shaped curves that move down the body. As each curve reaches the back fin, backward force is applied to the water, and in conjunction with the fins, moves the fish forward. The fish's fins function like an airplane's flaps. Fins also increase the tail's surface area, increasing speed. The streamlined body of the fish decreases the amount of friction from the water. Since body tissue is denser than water, fish must compensate for the difference or they will sink. Many bony fish have an internal organ called a swim bladder that adjusts their buoyancy through manipulation of gases.
Although most fish are exclusively ectothermic, there are exceptions. The only known bony fishes (infraclass Teleostei) that exhibit endothermy are in the suborder Scombroidei – which includes the billfishes, tunas, and the butterfly kingfish, a basal species of mackerel[53] – and also the opah. The opah, a lampriform, was demonstrated in 2015 to utilize "whole-body endothermy", generating heat with its swimming muscles to warm its body while countercurrent exchange (as in respiration) minimizes heat loss.[54] It is able to actively hunt prey such as squid and swim for long distances due to the ability to warm its entire body, including its heart,[55] which is a trait typically found in only mammals and birds (in the form of homeothermy). In the cartilaginous fishes (class Chondrichthyes), sharks of the families Lamnidae (porbeagle, mackerel, salmon, and great white sharks) and Alopiidae (thresher sharks) exhibit endothermy. The degree of endothermy varies from the billfishes, which warm only their eyes and brain, to the bluefin tuna and the porbeagle shark, which maintain body temperatures in excess of 20 °C (68 °F) above ambient water temperatures.[53]
Endothermy, though metabolically costly, is thought to provide advantages such as increased muscle strength, higher rates of central nervous system processing, and higher rates of digestion.
Fish reproductive organs include testes and ovaries. In most species, gonads are paired organs of similar size, which can be partially or totally fused.[56] There may also be a range of secondary organs that increase reproductive fitness.
In terms of spermatogonia distribution, the structure of teleost testes is of two types: in the most common, spermatogonia occur all along the seminiferous tubules, while in atherinomorph fish they are confined to the distal portion of these structures. Fish can present cystic or semi-cystic spermatogenesis, defined by the phase at which germ cells are released from cysts into the lumen of the seminiferous tubules.[56]
Fish ovaries may be of three types: gymnovarian, secondary gymnovarian, or cystovarian. In the first type, the oocytes are released directly into the coelomic cavity, enter the ostium, pass through the oviduct, and are eliminated. Secondary gymnovarian ovaries shed ova into the coelom, from which they go directly into the oviduct. In the third type, the oocytes are conveyed to the exterior through the oviduct.[57] Gymnovaries are the primitive condition found in lungfish, sturgeon, and bowfin. Cystovaries characterize most teleosts, where the ovary lumen has continuity with the oviduct.[56] Secondary gymnovaries are found in salmonids and a few other teleosts.
Oogonia development in teleost fish varies according to the group, and the determination of oogenesis dynamics allows the understanding of maturation and fertilization processes. Changes in the nucleus, ooplasm, and the surrounding layers characterize the oocyte maturation process.[56]
Postovulatory follicles are structures formed after oocyte release; they do not have endocrine function, present a wide irregular lumen, and are rapidly reabsorbed in a process involving the apoptosis of follicular cells. A degenerative process called follicular atresia reabsorbs vitellogenic oocytes not spawned. This process can also occur, but less frequently, in oocytes in other development stages.[56]
Some fish, like the California sheephead, are hermaphrodites, having both testes and ovaries, either at different phases in their life cycle or, as in hamlets, simultaneously.
Over 97% of all known fish are oviparous,[58] that is, the eggs develop outside the mother's body. Examples of oviparous fish include salmon, goldfish, cichlids, tuna, and eels. In the majority of these species, fertilization takes place outside the mother's body, with the male and female fish shedding their gametes into the surrounding water. However, a few oviparous fish practice internal fertilization, with the male using some sort of intromittent organ to deliver sperm into the genital opening of the female, most notably the oviparous sharks, such as the horn shark, and oviparous rays, such as skates. In these cases, the male is equipped with a pair of modified pelvic fins known as claspers.
Marine fish can produce high numbers of eggs which are often released into the open water column. The eggs have an average diameter of 1 millimetre (0.04 in).
Egg of lamprey
Egg of catshark (mermaids' purse)
Egg of bullhead shark
Egg of chimaera
The newly hatched young of oviparous fish are called larvae. They are usually poorly formed, carry a large yolk sac (for nourishment), and are very different in appearance from juvenile and adult specimens. The larval period in oviparous fish is relatively short (usually only several weeks), and larvae rapidly grow and change appearance and structure (a process termed metamorphosis) to become juveniles. During this transition larvae must switch from their yolk sac to feeding on zooplankton prey, and many larvae starve because zooplankton density is typically inadequate.
In ovoviviparous fish the eggs develop inside the mother's body after internal fertilization but receive little or no nourishment directly from the mother, depending instead on the yolk. Each embryo develops in its own egg. Familiar examples of ovoviviparous fish include guppies, angel sharks, and coelacanths.
Some species of fish are viviparous. In such species the mother retains the eggs and nourishes the embryos. Typically, viviparous fish have a structure analogous to the placenta seen in mammals connecting the mother's blood supply with that of the embryo. Examples of viviparous fish include the surf-perches, splitfins, and lemon shark. Some viviparous fish exhibit oophagy, in which the developing embryos eat other eggs produced by the mother. This has been observed primarily among sharks, such as the shortfin mako and porbeagle, but is known for a few bony fish as well, such as the halfbeak Nomorhamphus ebrardtii.[59] Intrauterine cannibalism is an even more unusual mode of vivipary, in which the largest embryos eat weaker and smaller siblings. This behavior is also most commonly found among sharks, such as the grey nurse shark, but has also been reported for Nomorhamphus ebrardtii.[59]
Aquarists commonly refer to ovoviviparous and viviparous fish as livebearers.
Acoustic communication in fish involves the transmission of acoustic signals from one individual of a species to another. The production of sounds as a means of communication among fish is most often used in the context of feeding, aggression or courtship behaviour.[3]
The sounds emitted can vary depending on the species and stimulus involved. Fish can produce either stridulatory sounds by moving components of the skeletal system, or can produce non-stridulatory sounds by manipulating specialized organs such as the swimbladder.[4]
There are some species of fish that can produce sounds by rubbing or grinding their bones together. These noises produced by bone-on-bone interactions are known as 'stridulatory sounds'.[4]
An example of this is seen in Haemulon flavolineatum, a species commonly referred to as the 'French grunt fish', as it produces a grunting noise by grinding its teeth together.[4]
This behaviour is most pronounced when the H. flavolineatum is in distress situations.[4] The grunts produced by this species of fish generate a frequency of approximately 700 Hz, and last approximately 47 milliseconds.[4] The H. flavolineatum does not emit sounds with frequencies greater than 1000 Hz, and does not detect sounds that have frequencies greater than 1050 Hz.[4]
In a study conducted by Oliveira et al. (2014), the longsnout seahorse, Hippocampus reidi, was recorded producing two different categories of sounds: 'clicks' and 'growls'. The sounds emitted by the H. reidi are produced by rubbing their coronet bone across the grooved section of their neurocranium.[60]
'Clicking' sounds were found to be primarily produced during courtship and feeding, and the frequencies of clicks were within the range of 50 Hz–800 Hz.[61] The frequencies were noted to be on the higher end of the range during spawning periods, when the female and male fishes were less than fifteen centimeters apart.[61] Growl sounds were produced when the H. reidi encountered stressful situations, such as handling by researchers.[61] The 'growl' sounds consist of a series of sound pulses and are emitted simultaneously with body vibrations.[61]
Some fish species create noise by engaging specialized muscles that contract and cause swimbladder vibrations.
Oyster toadfish produce loud grunting sounds by contracting muscles located along the sides of their swim bladder, known as sonic muscles.[62]
Female and male toadfishes emit short-duration grunts, often as a fright response.[63] In addition to short-duration grunts, male toadfishes produce "boat whistle calls".[64] These calls are longer in duration, lower in frequency, and are primarily used to attract mates.[64]
The sounds emitted by O. tau have a frequency range of 140 Hz to 260 Hz.[64] The frequencies of the calls depend on the rate at which the sonic muscles contract.[65][62]
The red drum, Sciaenops ocellatus, produces drumming sounds by vibrating its swimbladder.[66] Vibrations are caused by the rapid contraction of sonic muscles that surround the dorsal aspect of the swimbladder.[66] These vibrations result in repeated sounds with frequencies that range from 100 to >200 Hz.[66] S. ocellatus can produce different calls depending on the stimuli involved.[66] The sounds created in courtship situations are different from those made during distressing events such as predatorial attacks.[66] Unlike the males, the females of this species do not produce sounds and lack sound-producing (sonic) muscles.[66]
Like other animals, fish suffer from diseases and parasites. To prevent disease they have a variety of defenses. Non-specific defenses include the skin and scales, as well as the mucus layer secreted by the epidermis that traps and inhibits the growth of microorganisms. If pathogens breach these defenses, fish can develop an inflammatory response that increases blood flow to the infected region and delivers white blood cells that attempt to destroy pathogens. Specific defenses respond to particular pathogens recognised by the fish's body, i.e., an immune response.[67] In recent years, vaccines have become widely used in aquaculture and also with ornamental fish, for example furunculosis vaccines in farmed salmon and koi herpes virus in koi.[68][69]
Some species use cleaner fish to remove external parasites. The best known of these are the Bluestreak cleaner wrasses of the genus Labroides found on coral reefs in the Indian and Pacific oceans. These small fish maintain so-called "cleaning stations" where other fish congregate and perform specific movements to attract the attention of the cleaners.[70] Cleaning behaviors have been observed in a number of fish groups, including an interesting case between two cichlids of the same genus, Etroplus maculatus, the cleaner, and the much larger Etroplus suratensis.[71]
Immune organs vary by type of fish.[72]
In the jawless fish (lampreys and hagfish), true lymphoid organs are absent. These fish rely on regions of lymphoid tissue within other organs to produce immune cells. For example, erythrocytes, macrophages, and plasma cells are produced in the anterior kidney (or pronephros) and some areas of the gut (where granulocytes mature). These tissues resemble primitive bone marrow in hagfish.
Cartilaginous fish (sharks and rays) have a more advanced immune system. They have three specialized organs that are unique to Chondrichthyes: the epigonal organs (lymphoid tissue similar to mammalian bone marrow) that surround the gonads, the Leydig's organ within the walls of their esophagus, and a spiral valve in their intestine. These organs house typical immune cells (granulocytes, lymphocytes, and plasma cells). They also possess an identifiable thymus and a well-developed spleen (their most important immune organ) where various lymphocytes, plasma cells, and macrophages develop and are stored.
Chondrostean fish (sturgeons, paddlefish, and bichirs) possess a major site for the production of granulocytes within a mass that is associated with the meninges (membranes surrounding the central nervous system). Their heart is frequently covered with tissue that contains lymphocytes, reticular cells, and a small number of macrophages. The chondrostean kidney is an important hemopoietic organ, where erythrocytes, granulocytes, lymphocytes, and macrophages develop.
Like chondrostean fish, the major immune tissues of bony fish (or teleosts) include the kidney (especially the anterior kidney), which houses many different immune cells.[73] In addition, teleost fish possess a thymus, spleen, and scattered immune areas within mucosal tissues (e.g. in the skin, gills, gut, and gonads). Much like the mammalian immune system, teleost erythrocytes, neutrophils, and granulocytes are believed to reside in the spleen, whereas lymphocytes are the major cell type found in the thymus.[74][75] In 2006, a lymphatic system similar to that in mammals was described in one species of teleost fish, the zebrafish. Although not yet confirmed, this system is presumably where naive (unstimulated) T cells accumulate while waiting to encounter an antigen.[76]
B and T lymphocytes bearing immunoglobulins and T cell receptors, respectively, are found in all jawed fishes. Indeed, the adaptive immune system as a whole evolved in an ancestor of all jawed vertebrates.[77]
The 2006 IUCN Red List names 1,173 fish species that are threatened with extinction.[78] Included are species such as Atlantic cod,[79] Devil's Hole pupfish,[80] coelacanths,[81] and great white sharks.[82] Because fish live underwater they are more difficult to study than terrestrial animals and plants, and information about fish populations is often lacking. However, freshwater fish seem particularly threatened because they often live in relatively small water bodies. For example, the Devil's Hole pupfish occupies only a single 3 by 6 metres (10 by 20 ft) pool.[83]
Overfishing is a major threat to edible fish such as cod and tuna.[84][85] Overfishing eventually causes population (known as stock) collapse because the survivors cannot produce enough young to replace those removed. Such commercial extinction does not mean that the species is extinct, merely that it can no longer sustain a fishery.
One well-studied example of fishery collapse is the Pacific sardine (Sardinops sagax caeruleus) fishery off the California coast. From a 1937 peak of 790,000 long tons (800,000 t), the catch steadily declined to only 24,000 long tons (24,000 t) in 1968, after which the fishery was no longer economically viable.[86]
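The scale of that collapse is easy to quantify from the catch figures just given:

```python
# Percentage decline of the Pacific sardine catch, using the figures above.
peak_catch = 790_000   # long tons, 1937 peak
last_catch = 24_000    # long tons, 1968
decline_pct = 100 * (peak_catch - last_catch) / peak_catch
print(f"Catch fell by {decline_pct:.1f}%")  # Catch fell by 97.0%
```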
The main tension between fisheries science and the fishing industry is that the two groups have different views on the resiliency of fisheries to intensive fishing. In places such as Scotland, Newfoundland, and Alaska the fishing industry is a major employer, so governments are predisposed to support it.[87][88] On the other hand, scientists and conservationists push for stringent protection, warning that many stocks could be wiped out within fifty years.[89][90]
A key stress on both freshwater and marine ecosystems is habitat degradation including water pollution, the building of dams, removal of water for use by humans, and the introduction of exotic species.[91] An example of a fish that has become endangered because of habitat change is the pallid sturgeon, a North American freshwater fish that lives in rivers damaged by human activity.[92]
Introduction of non-native species has occurred in many habitats. One of the best studied examples is the introduction of Nile perch into Lake Victoria in the 1960s. Nile perch gradually exterminated the lake's 500 endemic cichlid species. Some of them survive now in captive breeding programmes, but others are probably extinct.[93] Carp, snakeheads,[94] tilapia, European perch, brown trout, rainbow trout, and sea lampreys are other examples of fish that have caused problems by being introduced into alien environments.
Throughout history, humans have utilized fish as a food source. Historically and today, most fish protein has come from catching wild fish. However, aquaculture, or fish farming, which has been practiced since about 3,500 BCE in China,[95] is becoming increasingly important in many nations. Overall, about one-sixth of the world's protein is estimated to be provided by fish.[96] That proportion is considerably elevated in some developing nations and regions heavily dependent on the sea. Fish have likewise long been an important commodity in trade.
Catching fish for the purpose of food or sport is known as fishing, while the organized effort by humans to catch fish is called a fishery. Fisheries are a huge global business and provide income for millions of people.[96] The annual yield from all fisheries worldwide is about 154 million tons,[97] with popular species including herring, cod, anchovy, tuna, flounder, and salmon. However, the term fishery is broadly applied, and includes more organisms than just fish, such as mollusks and crustaceans, which are often called "fish" when used as food.
Fish have been recognized as a source of beauty for almost as long as they have been used for food, appearing in cave art, being raised as ornamental fish in ponds, and displayed in aquariums in homes, offices, and public settings.
Recreational fishing is fishing primarily for pleasure or competition; it can be contrasted with commercial fishing, which is fishing for profit, or subsistence fishing, which is fishing primarily for food. The most common form of recreational fishing is done with a rod, reel, line, hooks, and any one of a wide range of baits. Recreational fishing is particularly popular in North America and Europe, and state, provincial, and federal government agencies actively manage target fish species.[98][99] Angling is a method of fishing, specifically the practice of catching fish by means of an "angle" (hook). Anglers must select the right hook, cast accurately, and retrieve at the right speed while considering water and weather conditions, species, fish response, time of day, and other factors.
Fish themes have symbolic significance in many religions. In ancient Mesopotamia, fish offerings were made to the gods from the very earliest times.[100] Fish were also a major symbol of Enki, the god of water.[100] Fish frequently appear as filling motifs in cylinder seals from the Old Babylonian (c. 1830 BC – c. 1531 BC) and Neo-Assyrian (911–609 BC) periods.[100] Starting during the Kassite Period (c. 1600 BC – c. 1155 BC) and lasting until the early Persian Period (550–330 BC), healers and exorcists dressed in ritual garb resembling the bodies of fish.[100] During the Seleucid Period (312–63 BC), the legendary Babylonian culture hero Oannes, described by Berossus, was said to have dressed in the skin of a fish.[100] Fish were sacred to the Syrian goddess Atargatis[101] and, during her festivals, only her priests were permitted to eat them.[101]
In the Book of Jonah, a work of Jewish literature probably written in the fourth century BC, the central figure, a prophet named Jonah, is swallowed by a giant fish after being thrown overboard by the crew of the ship he is travelling on.[103][104][105] The fish later vomits Jonah out on shore after three days.[103][104][105] This book was later included as part of the Hebrew Bible, or Christian Old Testament,[106][107] and a version of the story it contains is summarized in Surah 37:139-148 of the Quran.[108] Early Christians used the ichthys, a symbol of a fish, to represent Jesus,[101][102] because the Greek word for fish, ΙΧΘΥΣ Ichthys, could be used as an acronym for "Ίησοῦς Χριστός, Θεοῦ Υἱός, Σωτήρ" (Iesous Christos, Theou Huios, Soter), meaning "Jesus Christ, Son of God, Saviour".[101][102] The gospels also refer to "fishers of men"[109] and feeding the multitude. In the dhamma of Buddhism, fish symbolize happiness, as they have complete freedom of movement in the water. They are often drawn in the form of carp, which are regarded in the Orient as sacred on account of their elegant beauty, size, and lifespan.
Among the deities said to take the form of a fish are Ika-Roa of the Polynesians, Dagon of various ancient Semitic peoples, the shark-gods of Hawaiʻi and Matsya of the Hindus. The astrological symbol Pisces is based on a constellation of the same name, but there is also a second fish constellation in the night sky, Piscis Austrinus.[110]
Fish feature prominently in art and literature, in movies such as Finding Nemo and books such as The Old Man and the Sea. Large fish, particularly sharks, have frequently been the subject of horror movies and thrillers, most notably the novel Jaws, which spawned a series of films of the same name that in turn inspired similar films or parodies such as Shark Tale and Snakehead Terror. Piranhas are shown in a similar light to sharks in films such as Piranha; however, contrary to popular belief, the red-bellied piranha is actually a generally timid scavenger species that is unlikely to harm humans.[111] Legends of half-human, half-fish mermaids have featured in folklore, including the stories of Hans Christian Andersen.
Though often used interchangeably, in biology the words "fish" and "fishes" have different meanings. Fish is used as a singular noun, or as a plural to describe multiple individuals from a single species. Fishes is used to describe different species or species groups.[112][113][114] Thus a pond would be said to contain 120 fish if all were from a single species, or 120 fishes if these included a mix of several species. The distinction is similar to that between people and peoples.
A random assemblage of fish merely using some localised resource such as food or nesting sites is known simply as an aggregation. When fish come together in an interactive, social grouping, then they may be forming either a shoal or a school depending on the degree of organisation. A shoal is a loosely organised group where each fish swims and forages independently but is attracted to other members of the group and adjusts its behaviour, such as swimming speed, so that it remains close to the other members of the group. Schools of fish are much more tightly organised, synchronising their swimming so that all fish move at the same speed and in the same direction. Shoaling and schooling behaviour is believed to provide a variety of advantages.[116]
While the words "school" and "shoal" have different meanings within biology, the distinctions are often ignored by non-specialists who treat the words as synonyms. Thus speakers of British English commonly use "shoal" to describe any grouping of fish, and speakers of American English commonly use "school" just as loosely.[117]
en/4693.html.txt
Pokémon[a] (English: /ˈpoʊkɪˌmɒn, -ki-, -keɪ-/),[1][2][3] also known as Pocket Monsters[b] in Japan, is a Japanese media franchise managed by the Pokémon Company, a company founded by, and with shares divided among, Nintendo, Game Freak, and Creatures.[4] The franchise copyright and Japanese trademark are shared by all three companies,[5] but Nintendo is the sole owner of the trademark in other countries.[6] The franchise was created by Satoshi Tajiri in 1995,[7] and is centered on fictional creatures called "Pokémon", which humans, known as Pokémon Trainers, catch and train to battle each other for sport. The English slogan for the franchise is "Gotta Catch 'Em All".[8][9] Works within the franchise are set in the Pokémon universe.
The franchise began as Pokémon Red and Green (later released outside of Japan as Pokémon Red and Blue), a pair of video games for the original Game Boy handheld system that were developed by Game Freak and published by Nintendo in February 1996. It soon became a media mix franchise adapted into many other media.[10] Pokémon has since become the highest-grossing media franchise of all time,[11][12][13] with $90 billion in total franchise revenue.[14][15] The original video game series is the second-best-selling video game franchise (behind Nintendo's Mario franchise)[16] with more than 346 million copies sold[17] and one billion mobile downloads,[18] and it spawned a hit anime television series that has become the most successful video game adaptation[19] with over 20 seasons and 1,000 episodes in 169 countries.[17] In addition, the Pokémon franchise includes the world's top-selling toy brand,[20] the top-selling trading card game[21] with over 28.8 billion cards sold,[17] an anime film series, a live-action film, books, manga comics, music, merchandise, and a theme park. The franchise is also represented in other Nintendo media, such as the Super Smash Bros. series.
In November 2005, 4Kids Entertainment, which had managed the non-game related licensing of Pokémon, announced that it had agreed not to renew the Pokémon representation agreement. The Pokémon Company International oversees all Pokémon licensing outside Asia.[22] In 2006, the franchise celebrated its tenth anniversary.[23] In 2016, the Pokémon Company celebrated Pokémon's 20th anniversary by airing an ad during Super Bowl 50 in January and issuing re-releases of the 1996 Game Boy games Pokémon Red, Green (only in Japan), and Blue, and the 1998 Game Boy Color game Pokémon Yellow for the Nintendo 3DS on February 26, 2016.[24][25] The mobile augmented reality game Pokémon Go was released in July 2016.[26] The first live-action film in the franchise, Pokémon Detective Pikachu, based on the 2018 Nintendo 3DS spinoff game Detective Pikachu, was released in 2019.[11] The most recently released games, Pokémon Sword and Shield, were released worldwide on the Nintendo Switch on November 15, 2019.[27]
The name Pokémon is the portmanteau of the Japanese brand Pocket Monsters.[28] The term "Pokémon", in addition to referring to the Pokémon franchise itself, also collectively refers to the 896 fictional species that have made appearances in Pokémon media as of the release of the eighth generation titles Pokémon Sword and Shield. "Pokémon" is identical in the singular and plural, as is each individual species name; it is grammatically correct to say "one Pokémon" and "many Pokémon", as well as "one Pikachu" and "many Pikachu".[29]
Pokémon executive director Satoshi Tajiri first thought of Pokémon, albeit with a different concept and name, around 1989, when the Game Boy was released. The concept of the Pokémon universe, in both the video games and the general fictional world of Pokémon, stems from the hobby of insect collecting, a popular pastime which Tajiri enjoyed as a child.[30] Players are designated as Pokémon Trainers and have three general goals: to complete the regional Pokédex by collecting all of the available Pokémon species found in the fictional region where a game takes place, to complete the national Pokédex by transferring Pokémon from other regions, and to train a team of powerful Pokémon from those they have caught to compete against teams owned by other Trainers so they may eventually win the Pokémon League and become the regional Champion. These themes of collecting, training, and battling are present in almost every version of the Pokémon franchise, including the video games, the anime and manga series, and the Pokémon Trading Card Game.
In most incarnations of the Pokémon universe, a Trainer who encounters a wild Pokémon is able to capture that Pokémon by throwing a specially designed, mass-producible spherical tool called a Poké Ball at it. If the Pokémon is unable to escape the confines of the Poké Ball, it is considered to be under the ownership of that Trainer. Afterwards, it will obey whatever commands it receives from its new Trainer, unless the Trainer demonstrates such a lack of experience that the Pokémon would rather act on its own accord. Trainers can send out any of their Pokémon to wage non-lethal battles against other Pokémon; if the opposing Pokémon is wild, the Trainer can capture that Pokémon with a Poké Ball, increasing their collection of creatures. In Pokémon Go, and in Pokémon: Let's Go, Pikachu! and Let's Go, Eevee!, wild Pokémon encountered by players can be caught in Poké Balls, but generally cannot be battled. Pokémon already owned by other Trainers cannot be captured, except under special circumstances in certain side games. If a Pokémon fully defeats an opponent in battle so that the opponent is knocked out ("faints"), the winning Pokémon gains experience points and may level up. Beginning with Pokémon X and Y, experience points are also gained from catching Pokémon in Poké Balls. When leveling up, the Pokémon's battling aptitude statistics ("stats", such as "Attack" and "Speed") increase. At certain levels, the Pokémon may also learn new moves, which are techniques used in battle. In addition, many species of Pokémon can undergo a form of metamorphosis and transform into a similar but stronger species of Pokémon, a process called evolution; this process occurs spontaneously under differing circumstances, and is itself a central theme of the series. Some species of Pokémon may undergo a maximum of two evolutionary transformations, while others may undergo only one, and others may not evolve at all. 
For example, the Pokémon Pichu may evolve into Pikachu, which in turn may evolve into Raichu, following which no further evolutions may occur. Pokémon X and Y introduced the concept of "Mega Evolution," by which certain fully evolved Pokémon may temporarily undergo an additional evolution into a stronger form for the purpose of battling; this evolution is considered a special case, and unlike other evolutionary stages, is reversible.
In the main series, each game's single-player mode requires the Trainer to raise a team of Pokémon to defeat many non-player character (NPC) Trainers and their Pokémon. Each game lays out a somewhat linear path through a specific region of the Pokémon world for the Trainer to journey through, completing events and battling opponents along the way (including foiling the plans of an 'evil' team of Pokémon Trainers who serve as antagonists to the player). Excluding Pokémon Sun and Moon and Pokémon Ultra Sun and Ultra Moon, the games feature eight powerful Trainers, referred to as Gym Leaders, that the Trainer must defeat in order to progress. As a reward, the Trainer receives a Gym Badge, and once all eight badges are collected, the Trainer is eligible to challenge the region's Pokémon League, where four talented trainers (referred to collectively as the "Elite Four") challenge the Trainer to four Pokémon battles in succession. If the trainer can overcome this gauntlet, they must challenge the Regional Champion, the master Trainer who had previously defeated the Elite Four. Any Trainer who wins this last battle becomes the new champion.
All of the licensed Pokémon properties overseen by the Pokémon Company International are divided roughly by generation. These generations are roughly chronological divisions by release; every several years, when a sequel to the 1996 role-playing video games Pokémon Red and Green is released that features new Pokémon, characters, and gameplay concepts, that sequel is considered the start of a new generation of the franchise. The main Pokémon video games and their spin-offs, the anime, manga, and trading card game are all updated with the new Pokémon properties each time a new generation begins.[32] Some Pokémon from the newer games appear in anime episodes or films months, or even years, before the game they were programmed for comes out. The first generation began in Japan with Pokémon Red and Green on the Game Boy. As of 2020, there are eight generations of main series video games. The most recent games in the main series, Pokémon Sword and Shield, began the eighth and latest generation and were released worldwide for the Nintendo Switch on November 15, 2019.[33][34][35]
Pokémon, also known as Pokémon the Series to Western audiences since 2013, is an anime television series based on the Pokémon video game series. It was originally broadcast on TV Tokyo in 1997. To date, the anime has produced and aired over 1,000 episodes, divided into 7 series in Japan and 22 seasons internationally. It is one of the longest currently running anime series.[38]
The anime follows the quest of the main character, Ash Ketchum (known as Satoshi in Japan), a Pokémon Master in training, as he and a small group of friends travel around the world of Pokémon along with their Pokémon partners.[39]
Various children's books, collectively known as Pokémon Junior, are also based on the anime.[40]
A new seven-part anime series called Pokémon: Twilight Wings began airing on YouTube in 2020.[41] The series was animated by Studio Colorido.[42]
To date, there have been 23 animated theatrical Pokémon films (with another scheduled for July 2020[43]), which have been directed by Kunihiko Yuyama and Tetsuo Yajima, and distributed in Japan by Toho since 1998. The pair of films Pokémon the Movie: Black—Victini and Reshiram and White—Victini and Zekrom are counted together as one film. Collectibles, such as promotional trading cards, have been available with some of the films. Since the 20th film, the films have been set in an alternate continuity separate from the anime series.
List of Pokémon animated theatrical films
A reboot to the film franchise began with the release of the 20th movie, Pokémon the Movie: I Choose You!, in Japan on July 15, 2017. It was followed by a continuation, Pokémon the Movie: The Power of Us, which was released in Japan on July 13, 2018.
A live-action Pokémon film directed by Rob Letterman, produced by Legendary Entertainment,[48] and distributed in Japan by Toho and internationally by Warner Bros.[49] began filming in January 2018.[50] On August 24, the film's official title was announced as Pokémon Detective Pikachu.[51] It was released on May 10, 2019.[11] The film is based on the 2018 Nintendo 3DS spin-off video game Detective Pikachu. Development of a sequel was announced in January 2019, before the release of the first film.[52]
Pokémon CDs have been released in North America, some of them in conjunction with the theatrical releases of the first three and the 20th Pokémon films. These releases were commonplace until late 2001. On March 27, 2007, a tenth anniversary CD was released containing 18 tracks from the English dub; this was the first English-language release in over five years. Soundtracks of the Pokémon feature films have been released in Japan each year in conjunction with the theatrical releases. In 2017, a soundtrack album featuring music from the North American versions of the 17th through 20th movies was released.
^ The exact date of release is unknown.
^ Featuring music from Pokémon the Movie: Diancie and the Cocoon of Destruction, Pokémon the Movie: Hoopa and the Clash of Ages, Pokémon the Movie: Volcanion and the Mechanical Marvel, and Pokémon the Movie: I Choose You!
The Pokémon Trading Card Game (TCG) is a collectible card game with a goal similar to a Pokémon battle in the video game series. Players use Pokémon cards, with individual strengths and weaknesses, in an attempt to defeat their opponent by "knocking out" their Pokémon cards.[55] The game was published in North America by Wizards of the Coast in 1999.[56] With the release of the Game Boy Advance video games Pokémon Ruby and Sapphire, the Pokémon Company took back the card game from Wizards of the Coast and started publishing the cards themselves.[56] The Expedition expansion introduced the Pokémon-e Trading Card Game, where the cards (for the most part) were compatible with the Nintendo e-Reader. Nintendo discontinued its production of e-Reader compatible cards with the release of FireRed and LeafGreen. In 1998, Nintendo released a Game Boy Color version of the trading card game in Japan; Pokémon Trading Card Game was subsequently released to the US and Europe in 2000. The game included digital versions of cards from the original set and the first two expansions (Jungle and Fossil), as well as several cards exclusive to the game. A sequel was released in Japan in 2001.[57]
There are various Pokémon manga series, four of which were released in English by Viz Media, and seven in English by Chuang Yi. The manga series range from game-based series to series based on the anime and the Trading Card Game. Original stories have also been published. As there are several series created by different authors, most Pokémon manga series differ greatly from each other and from other media, such as the anime. Pokémon Pocket Monsters and Pokémon Adventures are the two manga in production since the first generation.
A Pokémon-styled Monopoly board game was released in August 2014.[72]
Pokémon has been criticized by some fundamentalist Christians over perceived occult and violent themes and the concept of "Pokémon evolution", which they feel goes against the Biblical creation account in Genesis.[73] Sat2000, a satellite television station based in Vatican City, has countered that the Pokémon Trading Card Game and video games are "full of inventive imagination" and have no "harmful moral side effects".[74][75] In the United Kingdom, the "Christian Power Cards" game was introduced in 1999 by David Tate who stated, "Some people aren't happy with Pokémon and want an alternative, others just want Christian games." The game was similar to the Pokémon Trading Card Game but used Biblical figures.[76]
In 1999, Nintendo stopped manufacturing the Japanese version of the "Koga's Ninja Trick" trading card because it depicted a manji, a traditionally Buddhist symbol with no negative connotations. The Jewish civil rights group Anti-Defamation League complained because the symbol is the reverse of a swastika, a Nazi symbol. The cards were intended for sale in Japan only, but the popularity of Pokémon led to import into the United States with approval from Nintendo. The Anti-Defamation League understood that the portrayed symbol was not intended to offend and acknowledged the sensitivity that Nintendo showed by removing the product.[77][78]
In 1999, two nine-year-old boys from Merrick, New York sued Nintendo because they claimed the Pokémon Trading Card Game caused their problematic gambling.[79]
In 2001, Saudi Arabia banned Pokémon games and the trading cards, alleging that the franchise promoted Zionism by displaying the Star of David in the trading cards (a six-pointed star is featured in the card game) as well as other religious symbols such as crosses they associated with Christianity and triangles they associated with Freemasonry; the games also involved gambling, which is in violation of Muslim doctrine.[80][81]
Pokémon has also been accused of promoting materialism.[82]
In 2012, PETA criticized the concept of Pokémon as supporting cruelty to animals. PETA compared the game's concept, of capturing animals and forcing them to fight, to cockfights, dog fighting rings and circuses, events frequently criticized for cruelty to animals. PETA released a game spoofing Pokémon where the Pokémon battle their trainers to win their freedom.[83] PETA reaffirmed their objections in 2016 with the release of Pokémon Go, promoting the hashtag #GottaFreeThemAll.[84]
On December 16, 1997, more than 635 Japanese children were admitted to hospitals with epileptic seizures.[85] It was determined the seizures were caused by watching an episode of Pokémon, "Dennō Senshi Porygon" (most commonly translated as "Electric Soldier Porygon"; season 1, episode 38); as a result, this episode has not been aired since. In this particular episode, there were bright explosions with rapidly alternating blue and red color patterns.[86] Subsequent research determined that these strobing light effects can cause some individuals to have epileptic seizures, even if the person has no previous history of epilepsy.[87] This incident is a common focus of Pokémon-related parodies in other media, and was lampooned by The Simpsons episode "Thirty Minutes over Tokyo"[88] and the South Park episode "Chinpokomon",[89] among others.
In March 2000, Morrison Entertainment Group, a toy developer based at Manhattan Beach, California, sued Nintendo over claims that Pokémon infringed on its own Monster in My Pocket characters. A judge ruled there was no infringement and Morrison appealed the ruling. On February 4, 2003, the U.S. Court of Appeals for the Ninth Circuit affirmed the decision by the District Court to dismiss the suit.[90]
Within its first two days of release, Pokémon Go raised safety concerns among players. Multiple people also suffered minor injuries from falling while playing the game due to being distracted.[91]
Multiple police departments in various countries have issued warnings, some tongue-in-cheek, regarding inattentive driving, trespassing, and being targeted by criminals due to being unaware of one's surroundings.[92][93] People have suffered various injuries from accidents related to the game,[94][95][96][97] and Bosnian players have been warned to stay out of minefields left over from the 1990s Bosnian War.[98] On July 20, 2016, it was reported that an 18-year-old boy in Chiquimula, Guatemala was shot and killed while playing the game in the late evening hours. This was the first reported death in connection with the app. The boy's 17-year-old cousin, who was accompanying the victim, was shot in the foot. Police speculated that the shooters used the game's GPS capability to find the two.[99]
Pokémon, being a globally popular franchise, has left a significant mark on today's popular culture. The various species of Pokémon have become pop culture icons; examples include two different Pikachu balloons in the Macy's Thanksgiving Day Parade, Pokémon-themed airplanes operated by All Nippon Airways, merchandise items, and a traveling theme park that was in Nagoya, Japan in 2005 and in Taipei in 2006. Pokémon also appeared on the cover of the U.S. magazine Time in 1999.[100] The Comedy Central show Drawn Together has a character named Ling-Ling who is a parody of Pikachu.[101] Several other shows such as The Simpsons,[102] South Park[103] and Robot Chicken[104] have made references and spoofs of Pokémon, among other series. Pokémon was featured on VH1's I Love the '90s: Part Deux. A live action show based on the anime called Pokémon Live! toured the United States in late 2000.[105] Jim Butcher cites Pokémon as one of the inspirations for the Codex Alera series of novels.[106]
Pokémon has even made its mark in the realm of science. This includes animals named after Pokémon, such as Stentorceps weedlei (named after the Pokémon Weedle for its resemblance) and Chilicola charizard (named after the Pokémon Charizard).[107] There is also a protein named after Pikachu, called Pikachurin.
In November 2001, Nintendo opened a store called the Pokémon Center in New York, in Rockefeller Center,[108] modeled after the two other Pokémon Center stores in Tokyo and Osaka and named after a staple of the video game series. Pokémon Centers are fictional buildings where Trainers take their injured Pokémon to be healed after combat.[109] The store sold Pokémon merchandise on a total of two floors, with items ranging from collectible shirts to stuffed Pokémon plushies.[110] The store also featured a Pokémon Distributing Machine in which players would place their game to receive an egg of a Pokémon that was being given out at that time. The store also had tables that were open for players of the Pokémon Trading Card Game to duel each other or an employee. The store was closed and replaced by the Nintendo World Store on May 14, 2005.[111] Four Pokémon Center kiosks were put in malls in the Seattle area.[112] The Pokémon Center online store was relaunched on August 6, 2014.[113]
Professor of Education Joseph Tobin theorizes that the success of the franchise was due to the long list of names that could be learned by children and repeated in their peer groups. Its rich fictional universe provides opportunities for discussion and demonstration of knowledge in front of their peers. The names of the creatures were linked to its characteristics, which converged with the children's belief that names have symbolic power. Children can pick their favourite Pokémon and affirm their individuality while at the same time affirming their conformance to the values of the group, and they can distinguish themselves from others by asserting what they liked and what they did not like from every chapter. Pokémon gained popularity because it provides a sense of identity to a wide variety of children, and lost it quickly when many of those children found that the identity groups were too big and searched for identities that would distinguish them into smaller groups.[114]
Pokémon's history has been marked at times by rivalry with the Digimon media franchise that debuted at a similar time. Described as "the other 'mon'" by IGN's Juan Castro, Digimon has not enjoyed Pokémon's level of international popularity or success, but has maintained a dedicated fanbase.[115] IGN's Lucas M. Thomas stated that Pokémon is Digimon's "constant competition and comparison", attributing the former's relative success to the simplicity of its evolution mechanic as opposed to Digivolution.[116] The two have been noted for conceptual and stylistic similarities by sources such as GameZone.[117] A debate among fans exists over which of the two franchises came first.[118] In actuality, the first Pokémon media, Pokémon Red and Green, were released initially on February 27, 1996;[119] whereas the Digimon virtual pet was released on June 26, 1997.
While Pokémon's target demographic is children, early purchasers of Pokémon Omega Ruby and Alpha Sapphire were in their 20s.[120] Many fans are adults who originally played the games as children and had later returned to the series.[121]
Numerous fan sites exist for the Pokémon franchise, including Bulbapedia, a wiki-based encyclopedia,[122][123][124] and Serebii,[125] a news and reference website.[126] Other large fan communities exist on other platforms, such as the r/pokemon subreddit with over 2.2 million subscribers.[127]
A significant community around the Pokémon video games' metagame has existed for a long time, analyzing the best ways to use each Pokémon to their full potential in competitive battles. The most prolific competitive community is Smogon University, which has created a widely accepted tier-based battle system.[128]
Smogon is affiliated with an online Pokémon game called Pokémon Showdown, in which players create a team and battle against other players around the world using the competitive tiers created by Smogon.[129]
In early 2014, an anonymous video streamer on Twitch launched Twitch Plays Pokémon, an experiment trying to crowdsource playing subsequent Pokémon games, starting with Pokémon Red.[130][131]
A challenge called the Nuzlocke Challenge allows players to capture only the first Pokémon encountered in each area. If they do not succeed in capturing that Pokémon, there are no second chances. When a Pokémon faints, it is considered "dead" and must be released or stored in the PC permanently.[132] If the player's whole team faints, the game is considered over, and the player must restart.[133] The original idea consisted of two or three rules that the community has built upon. There are many fan-made Pokémon games that contain a game mode similar to the Nuzlocke Challenge, such as Pokémon Uranium.[134]
A study at Stanford Neurosciences published in Nature performed magnetic resonance imaging scans of 11 Pokémon experts and 11 controls, finding that seeing Pokémon stimulated activity in the visual cortex, in a different place than is triggered by recognizing faces, places or words, demonstrating the brain's ability to create such specialized areas.[135]
|
en/4694.html.txt
ADDED
@@ -0,0 +1,90 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
1 |
+
|
2 |
+
|
3 |
+
|
4 |
+
|
5 |
+
Pokémon[a] (English: /ˈpoʊkɪˌmɒn, -ki-, -keɪ-/),[1][2][3] also known as Pocket Monsters[b] in Japan, is a Japanese media franchise managed by the Pokémon Company, a company founded and with shares divided between Nintendo, Game Freak, and Creatures.[4] The franchise copyright and Japanese trademark is shared by all three companies,[5] but Nintendo is the sole owner of the trademark in other countries.[6] The franchise was created by Satoshi Tajiri in 1995,[7] and is centered on fictional creatures called "Pokémon", which humans, known as Pokémon Trainers, catch and train to battle each other for sport. The English slogan for the franchise is "Gotta Catch 'Em All".[8][9] Works within the franchise are set in the Pokémon universe.
The franchise began as Pokémon Red and Green (later released outside of Japan as Pokémon Red and Blue), a pair of video games for the original Game Boy handheld system that were developed by Game Freak and published by Nintendo in February 1996. It soon became a media mix franchise adapted into many different media.[10] Pokémon has since become the highest-grossing media franchise of all time,[11][12][13] with $90 billion in total franchise revenue.[14][15] The original video game series is the second-best-selling video game franchise (behind Nintendo's Mario franchise)[16] with more than 346 million copies sold[17] and one billion mobile downloads,[18] and it spawned a hit anime television series that has become the most successful video game adaptation[19] with over 20 seasons and 1,000 episodes in 169 countries.[17] In addition, the Pokémon franchise includes the world's top-selling toy brand,[20] the top-selling trading card game[21] with over 28.8 billion cards sold,[17] an anime film series, a live-action film, books, manga comics, music, merchandise, and a theme park. The franchise is also represented in other Nintendo media, such as the Super Smash Bros. series.
In November 2005, 4Kids Entertainment, which had managed the non-game related licensing of Pokémon, announced that it had agreed not to renew the Pokémon representation agreement. The Pokémon Company International oversees all Pokémon licensing outside Asia.[22] In 2006, the franchise celebrated its tenth anniversary.[23] In 2016, the Pokémon Company celebrated Pokémon's 20th anniversary by airing an ad during Super Bowl 50 in January and issuing re-releases of the 1996 Game Boy games Pokémon Red, Green (only in Japan), and Blue, and the 1998 Game Boy Color game Pokémon Yellow for the Nintendo 3DS on February 26, 2016.[24][25] The mobile augmented reality game Pokémon Go was released in July 2016.[26] The first live-action film in the franchise, Pokémon Detective Pikachu, based on the 2018 Nintendo 3DS spinoff game Detective Pikachu, was released in 2019.[11] The most recently released games, Pokémon Sword and Shield, were released worldwide on the Nintendo Switch on November 15, 2019.[27]
The name Pokémon is a portmanteau of the Japanese brand name Pocket Monsters.[28] The term "Pokémon", in addition to referring to the Pokémon franchise itself, also collectively refers to the 896 fictional species that have made appearances in Pokémon media as of the release of the eighth generation titles Pokémon Sword and Shield. "Pokémon" is identical in the singular and plural, as is each individual species name; it is grammatically correct to say "one Pokémon" and "many Pokémon", as well as "one Pikachu" and "many Pikachu".[29]
Pokémon executive director Satoshi Tajiri first thought of Pokémon, albeit with a different concept and name, around 1989, when the Game Boy was released. The concept of the Pokémon universe, in both the video games and the general fictional world of Pokémon, stems from the hobby of insect collecting, a popular pastime which Tajiri enjoyed as a child.[30] Players are designated as Pokémon Trainers and have three general goals: to complete the regional Pokédex by collecting all of the available Pokémon species found in the fictional region where a game takes place, to complete the national Pokédex by transferring Pokémon from other regions, and to train a team of powerful Pokémon from those they have caught to compete against teams owned by other Trainers so they may eventually win the Pokémon League and become the regional Champion. These themes of collecting, training, and battling are present in almost every version of the Pokémon franchise, including the video games, the anime and manga series, and the Pokémon Trading Card Game.
In most incarnations of the Pokémon universe, a Trainer who encounters a wild Pokémon is able to capture that Pokémon by throwing a specially designed, mass-producible spherical tool called a Poké Ball at it. If the Pokémon is unable to escape the confines of the Poké Ball, it is considered to be under the ownership of that Trainer. Afterwards, it will obey whatever commands it receives from its new Trainer, unless the Trainer demonstrates such a lack of experience that the Pokémon would rather act of its own accord. Trainers can send out any of their Pokémon to wage non-lethal battles against other Pokémon; if the opposing Pokémon is wild, the Trainer can capture that Pokémon with a Poké Ball, increasing their collection of creatures. In Pokémon Go, and in Pokémon: Let's Go, Pikachu! and Let's Go, Eevee!, wild Pokémon encountered by players can be caught in Poké Balls, but generally cannot be battled. Pokémon already owned by other Trainers cannot be captured, except under special circumstances in certain side games.

If a Pokémon fully defeats an opponent in battle so that the opponent is knocked out ("faints"), the winning Pokémon gains experience points and may level up. Beginning with Pokémon X and Y, experience points are also gained from catching Pokémon in Poké Balls. When leveling up, the Pokémon's battling aptitude statistics ("stats", such as "Attack" and "Speed") increase. At certain levels, the Pokémon may also learn new moves, which are techniques used in battle.

In addition, many species of Pokémon can undergo a form of metamorphosis and transform into a similar but stronger species of Pokémon, a process called evolution; this process occurs spontaneously under differing circumstances, and is itself a central theme of the series. Some species of Pokémon may undergo a maximum of two evolutionary transformations, while others may undergo only one, and others may not evolve at all. For example, the Pokémon Pichu may evolve into Pikachu, which in turn may evolve into Raichu, following which no further evolutions may occur. Pokémon X and Y introduced the concept of "Mega Evolution", by which certain fully evolved Pokémon may temporarily undergo an additional evolution into a stronger form for the purpose of battling; this evolution is considered a special case and, unlike other evolutionary stages, is reversible.
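The staged evolution described above can be sketched as a simple lookup over an evolution chain. This is an illustrative model only; the `EVOLVES_TO` table and function name are assumptions for the sketch, not official game data structures.

```python
# Minimal sketch of staged evolution chains (illustrative, not game data).
# A species maps to the species it evolves into, if any.
EVOLVES_TO = {
    "Pichu": "Pikachu",    # first evolutionary transformation
    "Pikachu": "Raichu",   # second and final transformation
}

def evolution_line(species: str) -> list[str]:
    """Follow a species through every evolution stage it can reach."""
    line = [species]
    while line[-1] in EVOLVES_TO:
        line.append(EVOLVES_TO[line[-1]])
    return line

print(evolution_line("Pichu"))   # ['Pichu', 'Pikachu', 'Raichu']
print(evolution_line("Raichu"))  # ['Raichu'] — no further evolutions
```

Species with a two-stage chain produce a three-entry line; species absent from the table, like a fully evolved Raichu, simply return themselves.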
In the main series, each game's single-player mode requires the Trainer to raise a team of Pokémon to defeat many non-player character (NPC) Trainers and their Pokémon. Each game lays out a somewhat linear path through a specific region of the Pokémon world for the Trainer to journey through, completing events and battling opponents along the way (including foiling the plans of an 'evil' team of Pokémon Trainers who serve as antagonists to the player). Excluding Pokémon Sun and Moon and Pokémon Ultra Sun and Ultra Moon, the games feature eight powerful Trainers, referred to as Gym Leaders, whom the Trainer must defeat in order to progress. As a reward, the Trainer receives a Gym Badge, and once all eight badges are collected, the Trainer is eligible to challenge the region's Pokémon League, where four talented Trainers (referred to collectively as the "Elite Four") challenge the Trainer to four Pokémon battles in succession. If the Trainer can overcome this gauntlet, they must challenge the Regional Champion, the master Trainer who previously defeated the Elite Four. Any Trainer who wins this last battle becomes the new champion.
All of the licensed Pokémon properties overseen by the Pokémon Company International are divided roughly by generation. These generations are roughly chronological divisions by release; every several years, when a sequel to the 1996 role-playing video games Pokémon Red and Green is released featuring new Pokémon, characters, and gameplay concepts, that sequel is considered the start of a new generation of the franchise. The main Pokémon video games and their spin-offs, the anime, manga, and trading card game are all updated with the new Pokémon properties each time a new generation begins.[32] Some Pokémon from the newer games appear in anime episodes or films months, or even years, before the game they were programmed for came out. The first generation began in Japan with Pokémon Red and Green on the Game Boy. As of 2020, there are eight generations of main series video games. The most recent games in the main series, Pokémon Sword and Shield, began the eighth generation and were released worldwide for the Nintendo Switch on November 15, 2019.[33][34][35]
Pokémon, also known as Pokémon the Series to Western audiences since 2013, is an anime television series based on the Pokémon video game series. It was originally broadcast on TV Tokyo in 1997. To date, the anime has produced and aired over 1,000 episodes, divided into 7 series in Japan and 22 seasons internationally. It is one of the longest-running anime series currently airing.[38]
The anime follows the quest of the main character, Ash Ketchum (known as Satoshi in Japan), a Pokémon Master in training, as he and a small group of friends travel around the world of Pokémon along with their Pokémon partners.[39]
Various children's books, collectively known as Pokémon Junior, are also based on the anime.[40]
A new seven-part anime series called Pokémon: Twilight Wings began airing on YouTube in 2020.[41] The series was animated by Studio Colorido.[42]
To date, there have been 23 animated theatrical Pokémon films (with another in production for July 2020[43]), which have been directed by Kunihiko Yuyama and Tetsuo Yajima and distributed in Japan by Toho since 1998. The pair of films Pokémon the Movie: Black—Victini and Reshiram and White—Victini and Zekrom are considered together as one film. Collectibles, such as promotional trading cards, have been available with some of the films. Since the 20th film, the films have been set in an alternate continuity separate from the anime series.
List of Pokémon animated theatrical films
A reboot to the film franchise began with the release of the 20th movie, Pokémon the Movie: I Choose You!, in Japan on July 15, 2017. It was followed by a continuation, Pokémon the Movie: The Power of Us, which was released in Japan on July 13, 2018.
A live-action Pokémon film directed by Rob Letterman, produced by Legendary Entertainment,[48] and distributed in Japan by Toho and internationally by Warner Bros.[49] began filming in January 2018.[50] On August 24, the film's official title was announced as Pokémon Detective Pikachu.[51] It was released on May 10, 2019.[11] The film is based on the 2018 Nintendo 3DS spin-off video game Detective Pikachu. Development of a sequel was announced in January 2019, before the release of the first film.[52]
Pokémon CDs have been released in North America, some of them in conjunction with the theatrical releases of the first three and the 20th Pokémon films. These releases were commonplace until late 2001. On March 27, 2007, a tenth anniversary CD was released containing 18 tracks from the English dub; this was the first English-language release in over five years. Soundtracks of the Pokémon feature films have been released in Japan each year in conjunction with the theatrical releases. In 2017, a soundtrack album featuring music from the North American versions of the 17th through 20th movies was released.
^ The exact date of release is unknown.
^ Featuring music from Pokémon the Movie: Diancie and the Cocoon of Destruction, Pokémon the Movie: Hoopa and the Clash of Ages, Pokémon the Movie: Volcanion and the Mechanical Marvel, and Pokémon the Movie: I Choose You!
The Pokémon Trading Card Game (TCG) is a collectible card game with a goal similar to a Pokémon battle in the video game series. Players use Pokémon cards, with individual strengths and weaknesses, in an attempt to defeat their opponent by "knocking out" their Pokémon cards.[55] The game was published in North America by Wizards of the Coast in 1999.[56] With the release of the Game Boy Advance video games Pokémon Ruby and Sapphire, the Pokémon Company took back the card game from Wizards of the Coast and started publishing the cards themselves.[56] The Expedition expansion introduced the Pokémon-e Trading Card Game, where the cards (for the most part) were compatible with the Nintendo e-Reader. Nintendo discontinued its production of e-Reader compatible cards with the release of FireRed and LeafGreen. In 1998, Nintendo released a Game Boy Color version of the trading card game in Japan; Pokémon Trading Card Game was subsequently released in the US and Europe in 2000. The game included digital versions of cards from the original set and the first two expansions (Jungle and Fossil), as well as several cards exclusive to the game. A sequel was released in Japan in 2001.[57]
There are various Pokémon manga series, four of which were released in English by Viz Media and seven of which were released in English by Chuang Yi. The manga series vary from game-based adaptations to stories based on the anime and the Trading Card Game. Original stories have also been published. As there are several series created by different authors, most Pokémon manga series differ greatly from each other and from other media, such as the anime. Pokémon Pocket Monsters and Pokémon Adventures are the two manga that have been in production since the first generation.
A Pokémon-styled Monopoly board game was released in August 2014.[72]
Pokémon has been criticized by some fundamentalist Christians over perceived occult and violent themes and the concept of "Pokémon evolution", which they feel goes against the Biblical creation account in Genesis.[73] Sat2000, a satellite television station based in Vatican City, has countered that the Pokémon Trading Card Game and video games are "full of inventive imagination" and have no "harmful moral side effects".[74][75] In the United Kingdom, the "Christian Power Cards" game was introduced in 1999 by David Tate who stated, "Some people aren't happy with Pokémon and want an alternative, others just want Christian games." The game was similar to the Pokémon Trading Card Game but used Biblical figures.[76]
In 1999, Nintendo stopped manufacturing the Japanese version of the "Koga's Ninja Trick" trading card because it depicted a manji, a traditionally Buddhist symbol with no negative connotations. The Jewish civil rights group Anti-Defamation League complained because the symbol is the reverse of a swastika, a Nazi symbol. The cards were intended for sale in Japan only, but the popularity of Pokémon led to import into the United States with approval from Nintendo. The Anti-Defamation League understood that the portrayed symbol was not intended to offend and acknowledged the sensitivity that Nintendo showed by removing the product.[77][78]
In 1999, two nine-year-old boys from Merrick, New York sued Nintendo because they claimed the Pokémon Trading Card Game caused their problematic gambling.[79]
In 2001, Saudi Arabia banned Pokémon games and the trading cards, alleging that the franchise promoted Zionism by displaying the Star of David in the trading cards (a six-pointed star is featured in the card game) as well as other religious symbols such as crosses they associated with Christianity and triangles they associated with Freemasonry; the games also involved gambling, which is in violation of Muslim doctrine.[80][81]
Pokémon has also been accused of promoting materialism.[82]
In 2012, PETA criticized the concept of Pokémon as supporting cruelty to animals. PETA compared the game's concept, of capturing animals and forcing them to fight, to cockfights, dog fighting rings and circuses, events frequently criticized for cruelty to animals. PETA released a game spoofing Pokémon where the Pokémon battle their trainers to win their freedom.[83] PETA reaffirmed their objections in 2016 with the release of Pokémon Go, promoting the hashtag #GottaFreeThemAll.[84]
On December 16, 1997, more than 635 Japanese children were admitted to hospitals with epileptic seizures.[85] It was determined the seizures were caused by watching an episode of Pokémon, "Dennō Senshi Porygon" (most commonly translated as "Electric Soldier Porygon"; season 1, episode 38); as a result, this episode has not been aired since. In this particular episode, there were bright explosions with rapidly alternating blue and red color patterns.[86] Subsequent research determined that these strobing light effects can cause some individuals to have epileptic seizures, even if the person has no previous history of epilepsy.[87] This incident is a common focus of Pokémon-related parodies in other media, and was lampooned by The Simpsons episode "Thirty Minutes over Tokyo"[88] and the South Park episode "Chinpokomon",[89] among others.
In March 2000, Morrison Entertainment Group, a toy developer based in Manhattan Beach, California, sued Nintendo over claims that Pokémon infringed on its own Monster in My Pocket characters. A judge ruled there was no infringement and Morrison appealed the ruling. On February 4, 2003, the U.S. Court of Appeals for the Ninth Circuit affirmed the decision by the District Court to dismiss the suit.[90]
Within its first two days of release, Pokémon Go raised safety concerns among players. Multiple people also suffered minor injuries from falling while playing the game due to being distracted.[91]
Multiple police departments in various countries have issued warnings, some tongue-in-cheek, regarding inattentive driving, trespassing, and being targeted by criminals due to being unaware of one's surroundings.[92][93] People have suffered various injuries from accidents related to the game,[94][95][96][97] and Bosnian players have been warned to stay out of minefields left over from the 1990s Bosnian War.[98] On July 20, 2016, it was reported that an 18-year-old boy in Chiquimula, Guatemala was shot and killed while playing the game in the late evening hours. This was the first reported death in connection with the app. The boy's 17-year-old cousin, who was accompanying the victim, was shot in the foot. Police speculated that the shooters used the game's GPS capability to find the two.[99]
Pokémon, being a globally popular franchise, has left a significant mark on today's popular culture. The various species of Pokémon have become pop culture icons; examples include two different Pikachu balloons in the Macy's Thanksgiving Day Parade, Pokémon-themed airplanes operated by All Nippon Airways, merchandise items, and a traveling theme park that was in Nagoya, Japan in 2005 and in Taipei in 2006. Pokémon also appeared on the cover of the U.S. magazine Time in 1999.[100] The Comedy Central show Drawn Together has a character named Ling-Ling who is a parody of Pikachu.[101] Several other shows such as The Simpsons,[102] South Park[103] and Robot Chicken[104] have made references and spoofs of Pokémon, among other series. Pokémon was featured on VH1's I Love the '90s: Part Deux. A live action show based on the anime called Pokémon Live! toured the United States in late 2000.[105] Jim Butcher cites Pokémon as one of the inspirations for the Codex Alera series of novels.[106]
Pokémon has even made its mark in the realm of science. This includes animals named after Pokémon, such as Stentorceps weedlei (named after the Pokémon Weedle for its resemblance) and the bee Chilicola charizard (named after the Pokémon Charizard).[107] There is also a protein named after Pikachu, called Pikachurin.
In November 2001, Nintendo opened a store called the Pokémon Center in New York, in Rockefeller Center,[108] modeled after the two other Pokémon Center stores in Tokyo and Osaka and named after a staple of the video game series. Pokémon Centers are fictional buildings where Trainers take their injured Pokémon to be healed after combat.[109] The store sold Pokémon merchandise on a total of two floors, with items ranging from collectible shirts to stuffed Pokémon plushies.[110] The store also featured a Pokémon Distributing Machine in which players could insert their game to receive an egg of a Pokémon that was being given out at that time. The store also had tables open for players of the Pokémon Trading Card Game to duel each other or an employee. The store was closed and replaced by the Nintendo World Store on May 14, 2005.[111] Four Pokémon Center kiosks were put in malls in the Seattle area.[112] The Pokémon Center online store was relaunched on August 6, 2014.[113]
Professor of Education Joseph Tobin theorizes that the success of the franchise was due to the long list of names that could be learned by children and repeated in their peer groups. Its rich fictional universe provides opportunities for discussion and demonstration of knowledge in front of peers. The names of the creatures are linked to their characteristics, which converged with children's belief that names have symbolic power. Children can pick their favourite Pokémon and affirm their individuality while at the same time affirming their conformance to the values of the group, and they can distinguish themselves from others by asserting what they liked and what they did not like from every chapter. Pokémon gained popularity because it provides a sense of identity to a wide variety of children, and it lost popularity quickly when many of those children found that the identity groups were too big and sought identities that would distinguish them within smaller groups.[114]
Pokémon's history has been marked at times by rivalry with the Digimon media franchise that debuted at a similar time. Described as "the other 'mon'" by IGN's Juan Castro, Digimon has not enjoyed Pokémon's level of international popularity or success, but has maintained a dedicated fanbase.[115] IGN's Lucas M. Thomas stated that Pokémon is Digimon's "constant competition and comparison", attributing the former's relative success to the simplicity of its evolution mechanic as opposed to Digivolution.[116] The two have been noted for conceptual and stylistic similarities by sources such as GameZone.[117] A debate among fans exists over which of the two franchises came first.[118] In actuality, the first Pokémon media, Pokémon Red and Green, were first released on February 27, 1996,[119] whereas the Digimon virtual pet was released on June 26, 1997.
While Pokémon's target demographic is children, early purchasers of Pokémon Omega Ruby and Alpha Sapphire were in their 20s.[120] Many fans are adults who originally played the games as children and had later returned to the series.[121]
Numerous fan sites exist for the Pokémon franchise, including Bulbapedia, a wiki-based encyclopedia,[122][123][124] and Serebii,[125] a news and reference website.[126] Other large fan communities exist on other platforms, such as the r/pokemon subreddit with over 2.2 million subscribers.[127]
A significant community around the Pokémon video games' metagame has existed for a long time, analyzing the best ways to use each Pokémon to their full potential in competitive battles. The most prolific competitive community is Smogon University, which has created a widely accepted tier-based battle system.[128]
Smogon is affiliated with an online Pokémon game called Pokémon Showdown, in which players create a team and battle against other players around the world using the competitive tiers created by Smogon.[129]
In early 2014, an anonymous video streamer on Twitch launched Twitch Plays Pokémon, an experiment trying to crowdsource playing subsequent Pokémon games, starting with Pokémon Red.[130][131]
A challenge called the Nuzlocke Challenge limits players to capturing only the first Pokémon encountered in each area; if they do not succeed in capturing that Pokémon, there are no second chances. When a Pokémon faints, it is considered "dead" and must be released or stored in the PC permanently.[132] If all of the player's Pokémon faint, the run is considered over, and the player must restart.[133] The original idea consisted of two or three rules that the community has built upon. Many fan-made Pokémon games, such as Pokémon Uranium, contain a game mode similar to the Nuzlocke Challenge.[134]
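The two core rules above — one catch per area, permanent "death" on fainting — can be sketched as simple bookkeeping code. The class and method names here are hypothetical illustrations, not drawn from any actual game or community tool.

```python
# Sketch of the two core Nuzlocke rules (illustrative only):
# 1) only the first encounter per area may be captured;
# 2) a fainted Pokémon is "dead" and leaves the party permanently.

class NuzlockeRun:
    def __init__(self):
        self.encountered_areas = set()  # areas where a catch was already used
        self.party = []                 # living, usable Pokémon
        self.dead = []                  # fainted Pokémon, boxed or released

    def encounter(self, area: str, species: str) -> bool:
        """Capture the Pokémon only if this is the first encounter in the area."""
        if area in self.encountered_areas:
            return False  # no second chances in an already-used area
        self.encountered_areas.add(area)
        self.party.append(species)
        return True

    def faint(self, species: str):
        """A fainted Pokémon is treated as dead for the rest of the run."""
        self.party.remove(species)
        self.dead.append(species)

run = NuzlockeRun()
print(run.encounter("Route 1", "Pidgey"))   # True: first encounter here
print(run.encounter("Route 1", "Rattata"))  # False: no second chances
```

A run would end when `party` is empty, matching the rule that fainting the whole team means a restart.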
A study by Stanford neuroscientists published in Nature performed magnetic resonance imaging scans of 11 Pokémon experts and 11 controls, finding that seeing Pokémon stimulated activity in the visual cortex in a different region from those triggered by recognizing faces, places, or words, demonstrating the brain's ability to create such specialized areas.[135]
en/4695.html.txt
ADDED
@@ -0,0 +1,42 @@
Poker is any of a number of card games in which players wager over which hand is best according to that specific game's rules, in ways similar to the standard ranking of poker hands. Often using a standard deck, poker games vary in deck configuration, the number of cards in play, the number dealt face up or face down, and the number shared by all players, but all have rules that involve one or more rounds of betting.
In most modern poker games the first round of betting begins with one or more of the players making some form of a forced bet (the blind or ante). In standard poker, each player bets according to the rank they believe their hand is worth as compared to the other players. The action then proceeds clockwise as each player in turn must either match (or "call") the maximum previous bet, or fold, losing the amount bet so far and all further involvement in the hand. A player who matches a bet may also "raise" (increase) the bet. The betting round ends when all players have either called the last bet or folded. If all but one player folds on any round, the remaining player collects the pot without being required to reveal their hand. If more than one player remains in contention after the final betting round, a showdown takes place where the hands are revealed, and the player with the winning hand takes the pot.
With the exception of initial forced bets, money is only placed into the pot voluntarily by a player who either believes the bet has positive expected value or who is trying to bluff other players for various strategic reasons. Thus, while the outcome of any particular hand significantly involves chance, the long-run expectations of the players are determined by their actions chosen on the basis of probability, psychology, and game theory.
Poker has increased in popularity since the beginning of the 20th century and has gone from being primarily a recreational activity confined to small groups of enthusiasts to a widely popular activity, both for participants and spectators, including online, with many professional players and multimillion-dollar tournament prizes.
Poker was developed sometime during the early 19th century in the United States. Since those early beginnings, the game has grown to become an extremely popular pastime worldwide.
In the 1937 edition of Foster's Complete Hoyle, R. F. Foster wrote: "the game of poker, as first played in the United States, five cards to each player from a twenty-card pack, is undoubtedly the Persian game of As-Nas." By the 1990s some gaming historians, including David Parlett, started to challenge the notion that poker is a direct derivative of As-Nas. Developments in the 1970s led to poker becoming far more popular than it was before. Modern tournament play became popular in American casinos after the World Series of Poker began in 1970.[1]
Poker on television increased the popularity of the game during the turn of the millennium. This resulted in the poker boom a few years later, between 2003 and 2006.
In casual play, the right to deal a hand typically rotates among the players and is marked by a token called a dealer button (or buck). In a casino, a house dealer handles the cards for each hand, but the button (typically a white plastic disk) is rotated clockwise among the players to indicate a nominal dealer to determine the order of betting. The cards are dealt clockwise around the poker table, one at a time.
One or more players are usually required to make forced bets, usually either an ante or a blind bet (sometimes both). The dealer shuffles the cards, the player on the chair to his or her right cuts, and the dealer deals the appropriate number of cards to the players one at a time, beginning with the player to his or her left. Cards may be dealt either face-up or face-down, depending on the variant of poker being played. After the initial deal, the first of what may be several betting rounds begins. Between rounds, the players' hands develop in some way, often by being dealt additional cards or replacing cards previously dealt. At the end of each round, all bets are gathered into the central pot.
At any time during a betting round, if one player bets, no opponents choose to call (match) the bet, and all opponents instead fold, the hand ends immediately, the bettor is awarded the pot, no cards are required to be shown, and the next hand begins. This is what makes bluffing possible. Bluffing is a primary feature of poker, one that distinguishes it from other vying games and from other games that make use of poker hand rankings.
At the end of the last betting round, if more than one player remains, there is a showdown, in which the players reveal their previously hidden cards and evaluate their hands. The player with the best hand according to the poker variant being played wins the pot. A poker hand comprises five cards; in variants where a player has more than five cards available to them, only the best five-card combination counts. There are 10 different kinds of poker hands, such as straight flush and four of a kind.
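The ten hand categories and the best-five-of-many rule can be sketched as follows. `HAND_RANKS` summarizes the standard ranking order, and `best_five` assumes a hypothetical `rank_fn` that scores a five-card hand — a full evaluator is beyond this sketch, so the demonstration uses a stand-in scoring function.

```python
from itertools import combinations

# The 10 standard hand categories, strongest first (a summary of the
# rankings referred to above, not an evaluator):
HAND_RANKS = [
    "royal flush",
    "straight flush",
    "four of a kind",
    "full house",
    "flush",
    "straight",
    "three of a kind",
    "two pair",
    "one pair",
    "high card",
]

def best_five(cards, rank_fn):
    """In variants that deal more than five cards, only the best
    five-card combination counts; rank_fn scores a five-card hand."""
    return max(combinations(cards, 5), key=rank_fn)

# Demonstration with a stand-in scoring function (sum of card values)
# over seven "cards", as in a seven-card variant:
print(best_five(range(7), sum))  # (2, 3, 4, 5, 6)
```

In a real evaluator, `rank_fn` would map a five-card combination to its category in `HAND_RANKS` plus tie-break information.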
Poker variations are played where a "high hand" or a "low hand" may be the best desired hand. In other words, when playing a "low poker" variant, the best hand is the one that contains the lowest cards (and it can get further complicated by including or excluding flushes and straights, etc., from the "high hand" rules). So while the majority of poker variations are played "high hand", where the best high hand (straight, flush, etc.) wins, there are variations where the worst hand wins, such as lowball, acey-deucey, and high-low split games. To summarize, there can be variations that are "high poker", "low poker", and "high-low split". In the case of "high-low split", the pot is divided between the best high hand and the best low hand.
Poker has many variations,[2][3] all following a similar pattern of play[4] and generally using the same hand ranking hierarchy. There are four main families of variants, largely grouped by the protocol of card-dealing and betting:
Five Card Draw:
A complete hand is dealt to each player, face-down. Then each player must place an ante into the pot. They can then see their cards and bet accordingly. After betting, players can discard up to three cards and take new ones from the top of the deck. Then another round of betting takes place. Finally, each player must show his or her cards, and the player with the best hand wins.
Community card poker: Also known as "flop poker", community card poker is a variation of stud poker. Players are dealt an incomplete hand of face-down cards, and then a number of face-up community cards are dealt to the centre of the table, each of which can be used by one or more of the players to make a 5-card hand. Texas hold 'em and Omaha are two well-known variants of the community card family.
There are several methods for defining the structure of betting during a hand of poker. The three most common structures are known as "fixed-limit", "pot-limit", and "no-limit". In fixed-limit poker, betting and raising must be done by standardised amounts. For instance, if the required bet is X, an initial bettor may only bet X; if a player wishes to raise a bet, they may only raise by X. In pot-limit poker, a player may bet or raise any amount up to the size of the pot. When calculating the maximum raise allowed, all previous bets and calls, including the intending raiser's call, are first added to the pot. The raiser may then raise the previous bet by the full amount of the pot. In no-limit poker, a player may wager their entire betting stack at any point that they are allowed to make a bet. In all games, if a player does not have enough betting chips to fully match a bet, they may go "all-in", allowing them to show down their hand for the amount of chips they have remaining.
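The pot-limit raise calculation described above can be shown as a short sketch. The function name and argument conventions are ours, for illustration only, not from any poker library.

```python
# Sketch of the pot-limit maximum-raise rule described above.
def max_pot_limit_raise(pot_including_bets, amount_to_call):
    """Largest permitted raise on top of calling `amount_to_call`.

    pot_including_bets: chips in the pot, including all bets and calls
    so far in the hand (but not the intending raiser's pending call).
    """
    # The raiser's call is first added to the pot, and the raise may
    # then match the full amount of that new pot.
    return pot_including_bets + amount_to_call

# Example: 100 already in the pot, an opponent bets 50 (pot now 150).
# Calling the 50 makes the pot 200, so the maximum raise is 200,
# for a total wager of 250 chips.
```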
Other games that use poker hand rankings may likewise be referred to as poker. Video poker is a single-player video game that functions much like a slot machine; most video poker machines play draw poker, where the player bets, a hand is dealt, and the player can discard and replace cards. Payout is dependent on the hand resulting after the draw and the player's initial bet.
Strip poker is a traditional poker variation where players remove clothing when they lose bets. Since it depends only on the basic mechanic of betting in rounds, strip poker can be played with any form of poker; however, it is usually based on simple variants with few betting rounds, like five card draw.
Another game with the poker name, but with a vastly different mode of play, is called Acey-Deucey or Red Dog poker. This game is more similar to Blackjack in its layout and betting; each player bets against the house, and then is dealt two cards. For the player to win, the third card dealt (after an opportunity to raise the bet) must have a value in-between the first two. Payout is based on the odds that this is possible, based on the difference in values of the first two cards. Other poker-like games played at casinos against the house include three card poker and pai gow poker.
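The in-between win condition of Acey-Deucey can be expressed directly in code. This is an illustrative sketch of the rule described above, not a full payout table.

```python
# Minimal sketch of the Acey-Deucey / Red Dog win condition.
def third_card_wins(first, second, third):
    """The third card must fall strictly between the first two."""
    low, high = sorted((first, second))
    return low < third < high

def spread(first, second):
    """Number of card values that would win, which drives the payout odds:
    a wider gap between the first two cards means more winning third cards."""
    low, high = sorted((first, second))
    return max(0, high - low - 1)
```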
A variety of computer poker players have been developed by researchers at the University of Alberta, Carnegie Mellon University, and the University of Auckland amongst others.
In a January 2015 article[5] published in Science, a group of researchers mostly from the University of Alberta announced that they "essentially weakly solved" heads-up limit Texas Hold 'em with their development of their Cepheus poker bot. The authors claimed that Cepheus would lose at most 0.001 big blinds per game on average against its worst-case opponent, and the strategy is thus so "close to optimal" that "it can't be beaten with statistical significance within a lifetime of human poker playing".[6]
en/4696.html.txt
ADDED
@@ -0,0 +1,27 @@
A polder (Dutch pronunciation: [ˈpɔldər]) is a low-lying tract of land that forms an artificial hydrological entity, enclosed by embankments known as dikes. The three types of polder are:

- land reclaimed from a body of water, such as a lake or the seabed
- flood plains separated from the sea or a river by a dike
- marshes separated from the surrounding water by a dike and subsequently drained
The ground level in drained marshes subsides over time. All polders will eventually be below the surrounding water level some or all of the time. Water enters the low-lying polder through infiltration and water pressure of groundwater, or rainfall, or transport of water by rivers and canals. This usually means that the polder has an excess of water, which is pumped out or drained by opening sluices at low tide. Care must be taken not to set the internal water level too low. Polder land made up of peat (former marshland) will sink in relation to its previous level, because of peat decomposing when exposed to oxygen from the air.
Polders are at risk from flooding at all times, and care must be taken to protect the surrounding dikes. Dikes are typically built with locally available materials, and each material has its own risks: sand is prone to collapse owing to saturation by water; dry peat is lighter than water and potentially unable to retain water in very dry seasons. Some animals dig tunnels in the barrier, allowing water to infiltrate the structure; the muskrat is known for this activity and hunted in certain European countries because of it. Polders are most commonly, though not exclusively, found in river deltas, former fenlands, and coastal areas.
Flooding of polders has also been used as a military tactic in the past. One example is the flooding of the polders along the Yser River during World War I. Opening the sluices at high tide and closing them at low tide turned the polders into an inaccessible swamp, which allowed the Allied armies to stop the German army.
The Dutch word polder derives successively from Middle Dutch polre, from Old Dutch polra, and ultimately from pol-, a piece of land elevated above its surroundings, with the augmentative suffix -er and epenthetical -d-. The word has been adopted in thirty-six languages.[1]
The Netherlands is frequently associated with polders, as its engineers became noted for developing techniques to drain wetlands and make them usable for agriculture and other development. This is illustrated by the saying "God created the world, but the Dutch created the Netherlands".[2]
The Dutch have a long history of reclamation of marshes and fenland, resulting in some 3,000 polders[3] nationwide. By 1961, about half of the country's land, 18,000 square kilometres (6,800 sq mi), was reclaimed from the sea.[4] About half the total surface area of polders in north-west Europe is in the Netherlands. The first embankments in Europe were constructed in Roman times, and the first polders in the 11th century. The oldest still-existing polder is the Achtermeer polder, dating from 1533.
As a result of flooding disasters, water boards called waterschap (when situated more inland) or hoogheemraadschap (near the sea, mainly used in the Holland region)[5][6] were set up to maintain the integrity of the water defences around polders, maintain the waterways inside a polder, and control the various water levels inside and outside the polder. Water boards hold separate elections, levy taxes, and function independently from other government bodies. Their function is basically unchanged even today. As such, they are the oldest democratic institutions in the country. The necessary cooperation among all ranks to maintain polder integrity gave its name to the Dutch version of third-way politics—the Polder Model.
The 1953 flood disaster prompted a new approach to the design of dikes and other water-retaining structures, based on an acceptable probability of overflowing. Risk is defined as the product of probability and consequences. The potential damage in lives, property, and rebuilding costs is compared with the potential cost of water defences. From these calculations follows an acceptable flood risk from the sea of one in 4,000–10,000 years, while it is one in 100–2,500 years for a river flood. This policy guides the Dutch government to improve flood defences as new data on threat levels become available.
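The risk framing above (risk as the product of probability and consequences) can be sketched numerically. All figures below are invented for the example; they are not actual Dutch policy numbers.

```python
# Toy illustration: annual flood risk is the flood probability times
# the damage of one flood, and a defence upgrade is attractive when
# the expected damage it avoids exceeds its annualised cost.
def expected_annual_damage(flood_probability, damage):
    return flood_probability * damage

damage = 5_000_000_000  # hypothetical damage of one flood, in euros

# Moving from a 1-in-100-year to a 1-in-4000-year protection level
# greatly reduces the expected annual damage.
before = expected_annual_damage(1 / 100, damage)
after = expected_annual_damage(1 / 4000, damage)
avoided = before - after  # expected damage avoided per year
```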
Major Dutch polders and the years they were laid dry include Beemster (1609-1612), Schermer (1633-1635), and Haarlemmermeerpolder (1852). Polders created as part of the Zuiderzee Works include Wieringermeerpolder (1930), Noordoostpolder (1942), and Flevopolder (1956-1968).
Bangladesh has 123 polders, of which 49 are sea-facing, while the rest are along the numerous distributaries of the Ganges-Brahmaputra-Meghna River delta. These were constructed in the 1960s to protect the coast from tidal flooding and reduce salinity incursion.[7] They reduce long-term flooding and waterlogging following storm surges from tropical cyclones. They are also cultivated for agriculture.[8]
The Jiangnan region, at the Yangtze River Delta, has a long history of constructing polders. Most of these projects were performed between the 10th and 13th centuries.[10] The Chinese government also assisted local communities in constructing dikes for swampland water drainage.[11] The Lijia (里甲) self-monitoring system of 110 households under a lizhang (里长) headman was used for the purposes of service administration and tax collection in the polder, with a liangzhang (粮长, grain chief) responsible for maintaining the water system and a tangzhang (塘长, dike chief) for polder maintenance.[12]
In Germany, land reclaimed by dyking is called a koog. The German Deichgraf system was similar to the Dutch and is widely known from Theodor Storm's novella The Rider on the White Horse.
In southern Germany, the term polder is used for retention basins recreated by opening dikes during river floodplain restoration, a meaning somewhat opposite to that in coastal context.
en/4697.html.txt
ADDED
@@ -0,0 +1,296 @@
Magnetism is a class of physical phenomena that are mediated by magnetic fields. Electric currents and the magnetic moments of elementary particles give rise to a magnetic field, which acts on other currents and magnetic moments. Magnetism is one aspect of the combined phenomenon of electromagnetism. The most familiar effects occur in ferromagnetic materials, which are strongly attracted by magnetic fields and can be magnetized to become permanent magnets, producing magnetic fields themselves. Demagnetizing a magnet is also possible. Only a few substances are ferromagnetic; the most common ones are iron, cobalt and nickel and their alloys. The prefix ferro- refers to iron, because permanent magnetism was first observed in lodestone, a form of natural iron ore called magnetite, Fe3O4.
All substances exhibit some type of magnetism. Ferromagnetism is responsible for most of the effects of magnetism encountered in everyday life, but there are actually several types of magnetism. Paramagnetic substances, such as aluminum and oxygen, are weakly attracted to an applied magnetic field; diamagnetic substances, such as copper and carbon, are weakly repelled; while antiferromagnetic materials, such as chromium and spin glasses, have a more complex relationship with a magnetic field. The force of a magnet on paramagnetic, diamagnetic, and antiferromagnetic materials is usually too weak to be felt and can be detected only by laboratory instruments, so in everyday life, these substances are often described as non-magnetic.
The magnetic state (or magnetic phase) of a material depends on temperature, pressure, and the applied magnetic field. A material may exhibit more than one form of magnetism as these variables change.
The strength of a magnetic field almost always decreases with distance, though the exact mathematical relationship between strength and distance varies. Different configurations of magnetic moments and electric currents can result in complicated magnetic fields.
Only magnetic dipoles have been observed, although some theories predict the existence of magnetic monopoles.
Magnetism was first discovered in the ancient world, when people noticed that lodestones, naturally magnetized pieces of the mineral magnetite, could attract iron.[1] The word magnet comes from the Greek term μαγνῆτις λίθος magnētis lithos,[2] "the Magnesian stone,[3] lodestone." In ancient Greece, Aristotle attributed the first of what could be called a scientific discussion of magnetism to the philosopher Thales of Miletus, who lived from about 625 BC to about 545 BC.[4] The ancient Indian medical text Sushruta Samhita describes using magnetite to remove arrows embedded in a person's body.[5]
In ancient China, the earliest literary reference to magnetism lies in a 4th-century BC book named after its author, The Sage of Ghost Valley.[6]
The 2nd-century BC annals, Lüshi Chunqiu, also notes:
"The lodestone makes iron approach, or it attracts it."[7]
The earliest mention of the attraction of a needle is in a 1st-century work Lunheng (Balanced Inquiries): "A lodestone attracts a needle."[8]
The 11th-century Chinese scientist Shen Kuo was the first person to write—in the Dream Pool Essays—of the magnetic needle compass and that it improved the accuracy of navigation by employing the astronomical concept of true north.
By the 12th century, the Chinese were known to use the lodestone compass for navigation. They sculpted a directional spoon from lodestone in such a way that the handle of the spoon always pointed south.
Alexander Neckam, by 1187, was the first in Europe to describe the compass and its use for navigation. In 1269, Peter Peregrinus de Maricourt wrote the Epistola de magnete, the first extant treatise describing the properties of magnets. In 1282, the properties of magnets and the dry compasses were discussed by Al-Ashraf, a Yemeni physicist, astronomer, and geographer.[9]
Leonardo Garzoni's only extant work, the Due trattati sopra la natura, e le qualità della calamita, is the first known example of a modern treatment of magnetic phenomena. Written around 1580 and never published, the treatise nevertheless circulated widely. In particular, Garzoni is referred to as an expert in magnetism by Niccolò Cabeo, whose Philosophia Magnetica (1629) is essentially a re-adjustment of Garzoni's work. Garzoni's treatise was also known to Giovanni Battista Della Porta and William Gilbert.
In 1600, William Gilbert published his De Magnete, Magneticisque Corporibus, et de Magno Magnete Tellure (On the Magnet and Magnetic Bodies, and on the Great Magnet the Earth). In this work he describes many of his experiments with his model earth called the terrella. From his experiments, he concluded that the Earth was itself magnetic and that this was the reason compasses pointed north (previously, some believed that it was the pole star (Polaris) or a large magnetic island on the north pole that attracted the compass).
An understanding of the relationship between electricity and magnetism began in 1819 with work by Hans Christian Ørsted, a professor at the University of Copenhagen, who discovered by the accidental twitching of a compass needle near a wire that an electric current could create a magnetic field. This landmark experiment is known as Ørsted's Experiment. Several other experiments followed, with André-Marie Ampère, who in 1820 discovered that the magnetic field circulating in a closed-path was related to the current flowing through a surface enclosed by the path; Carl Friedrich Gauss; Jean-Baptiste Biot and Félix Savart, both of whom in 1820 came up with the Biot–Savart law giving an equation for the magnetic field from a current-carrying wire; Michael Faraday, who in 1831 found that a time-varying magnetic flux through a loop of wire induced a voltage, and others finding further links between magnetism and electricity. James Clerk Maxwell synthesized and expanded these insights into Maxwell's equations, unifying electricity, magnetism, and optics into the field of electromagnetism. In 1905, Albert Einstein used these laws in motivating his theory of special relativity,[10] requiring that the laws held true in all inertial reference frames.
Electromagnetism has continued to develop into the 21st century, being incorporated into the more fundamental theories of gauge theory, quantum electrodynamics, electroweak theory, and finally the standard model.
Magnetism, at its root, arises from two sources: electric currents, and the intrinsic spin magnetic moments of elementary particles.
The magnetic properties of materials are mainly due to the magnetic moments of their atoms' orbiting electrons. The magnetic moments of the nuclei of atoms are typically thousands of times smaller than the electrons' magnetic moments, so they are negligible in the context of the magnetization of materials. Nuclear magnetic moments are nevertheless very important in other contexts, particularly in nuclear magnetic resonance (NMR) and magnetic resonance imaging (MRI).
Ordinarily, the enormous number of electrons in a material are arranged such that their magnetic moments (both orbital and intrinsic) cancel out. This is due, to some extent, to electrons combining into pairs with opposite intrinsic magnetic moments as a result of the Pauli exclusion principle (see electron configuration), and combining into filled subshells with zero net orbital motion. In both cases, the electrons preferentially adopt arrangements in which the magnetic moment of each electron is canceled by the opposite moment of another electron. Moreover, even when the electron configuration is such that there are unpaired electrons and/or non-filled subshells, it is often the case that the various electrons in the solid will contribute magnetic moments that point in different, random directions so that the material will not be magnetic.
Sometimes, either spontaneously or owing to an applied external magnetic field, each of the electron magnetic moments will be, on average, lined up. A suitable material can then produce a strong net magnetic field.
The magnetic behavior of a material depends on its structure, particularly its electron configuration, for the reasons mentioned above, and also on the temperature. At high temperatures, random thermal motion makes it more difficult for the electrons to maintain alignment.
Diamagnetism appears in all materials and is the tendency of a material to oppose an applied magnetic field, and therefore, to be repelled by a magnetic field. However, in a material with paramagnetic properties (that is, with a tendency to enhance an external magnetic field), the paramagnetic behavior dominates.[12] Thus, despite its universal occurrence, diamagnetic behavior is observed only in a purely diamagnetic material. In a diamagnetic material, there are no unpaired electrons, so the intrinsic electron magnetic moments cannot produce any bulk effect. In these cases, the magnetization arises from the electrons' orbital motions, which can be understood classically as follows:
When a material is put in a magnetic field, the electrons circling the nucleus will experience, in addition to their Coulomb attraction to the nucleus, a Lorentz force from the magnetic field. Depending on which direction the electron is orbiting, this force may increase the centripetal force on the electrons, pulling them in towards the nucleus, or it may decrease the force, pulling them away from the nucleus. This effect systematically increases the orbital magnetic moments that were aligned opposite the field and decreases the ones aligned parallel to the field (in accordance with Lenz's law). This results in a small bulk magnetic moment, with an opposite direction to the applied field.
This description is meant only as a heuristic; the Bohr-van Leeuwen theorem shows that diamagnetism is impossible according to classical physics, and that a proper understanding requires a quantum-mechanical description.
All materials undergo this orbital response. However, in paramagnetic and ferromagnetic substances, the diamagnetic effect is overwhelmed by the much stronger effects caused by the unpaired electrons.
In a paramagnetic material there are unpaired electrons; i.e., atomic or molecular orbitals with exactly one electron in them. While paired electrons are required by the Pauli exclusion principle to have their intrinsic ('spin') magnetic moments pointing in opposite directions, causing their magnetic fields to cancel out, an unpaired electron is free to align its magnetic moment in any direction. When an external magnetic field is applied, these magnetic moments will tend to align themselves in the same direction as the applied field, thus reinforcing it.
A ferromagnet, like a paramagnetic substance, has unpaired electrons. However, in addition to the electrons' intrinsic magnetic moment's tendency to be parallel to an applied field, there is also in these materials a tendency for these magnetic moments to orient parallel to each other to maintain a lowered-energy state. Thus, even in the absence of an applied field, the magnetic moments of the electrons in the material spontaneously line up parallel to one another.
Every ferromagnetic substance has its own individual temperature, called the Curie temperature, or Curie point, above which it loses its ferromagnetic properties. This is because the thermal tendency to disorder overwhelms the energy-lowering due to ferromagnetic order.
Ferromagnetism only occurs in a few substances; common ones are iron, nickel, cobalt, their alloys, and some alloys of rare-earth metals.
The magnetic moments of atoms in a ferromagnetic material cause them to behave something like tiny permanent magnets. They stick together and align themselves into small regions of more or less uniform alignment called magnetic domains or Weiss domains. Magnetic domains can be observed with a magnetic force microscope to reveal magnetic domain boundaries that resemble white lines in the sketch. There are many scientific experiments that can physically show magnetic fields.
When a domain contains too many molecules, it becomes unstable and divides into two domains aligned in opposite directions, so that they stick together more stably, as shown at the right.
When exposed to a magnetic field, the domain boundaries move, so that the domains aligned with the magnetic field grow and dominate the structure (dotted yellow area), as shown at the left. When the magnetizing field is removed, the domains may not return to an unmagnetized state. This results in the ferromagnetic material's being magnetized, forming a permanent magnet.
When magnetized strongly enough that the prevailing domain overruns all others to result in only one single domain, the material is magnetically saturated. When a magnetized ferromagnetic material is heated to the Curie point temperature, the molecules are agitated to the point that the magnetic domains lose the organization, and the magnetic properties they cause cease. When the material is cooled, this domain alignment structure spontaneously returns, in a manner roughly analogous to how a liquid can freeze into a crystalline solid.
In an antiferromagnet, unlike a ferromagnet, there is a tendency for the intrinsic magnetic moments of neighboring valence electrons to point in opposite directions. When all atoms are arranged in a substance so that each neighbor is anti-parallel, the substance is antiferromagnetic. Antiferromagnets have a zero net magnetic moment, meaning that no field is produced by them. Antiferromagnets are less common compared to the other types of behaviors and are mostly observed at low temperatures. In varying temperatures, antiferromagnets can be seen to exhibit diamagnetic and ferromagnetic properties.
In some materials, neighboring electrons prefer to point in opposite directions, but there is no geometrical arrangement in which each pair of neighbors is anti-aligned. This is called a spin glass and is an example of geometrical frustration.
Like ferromagnetism, ferrimagnets retain their magnetization in the absence of a field. However, like antiferromagnets, neighboring pairs of electron spins tend to point in opposite directions. These two properties are not contradictory, because in the optimal geometrical arrangement, there is more magnetic moment from the sublattice of electrons that point in one direction, than from the sublattice that points in the opposite direction.
Most ferrites are ferrimagnetic. The first discovered magnetic substance, magnetite, is a ferrite and was originally believed to be a ferromagnet; Louis Néel disproved this, however, after discovering ferrimagnetism.
When a ferromagnet or ferrimagnet is sufficiently small, it acts like a single magnetic spin that is subject to Brownian motion. Its response to a magnetic field is qualitatively similar to the response of a paramagnet, but much larger.
An electromagnet is a type of magnet in which the magnetic field is produced by an electric current.[13] The magnetic field disappears when the current is turned off. Electromagnets usually consist of a large number of closely spaced turns of wire that create the magnetic field. The wire turns are often wound around a magnetic core made from a ferromagnetic or ferrimagnetic material such as iron; the magnetic core concentrates the magnetic flux and makes a more powerful magnet.
The main advantage of an electromagnet over a permanent magnet is that the magnetic field can be quickly changed by controlling the amount of electric current in the winding. However, unlike a permanent magnet that needs no power, an electromagnet requires a continuous supply of current to maintain the magnetic field.
Electromagnets are widely used as components of other electrical devices, such as motors, generators, relays, solenoids, loudspeakers, hard disks, MRI machines, scientific instruments, and magnetic separation equipment. Electromagnets are also employed in industry for picking up and moving heavy iron objects such as scrap iron and steel.[14] Electromagnetism was discovered in 1820.[15]
As a consequence of Einstein's theory of special relativity, electricity and magnetism are fundamentally interlinked. Both magnetism lacking electricity, and electricity without magnetism, are inconsistent with special relativity, due to such effects as length contraction, time dilation, and the fact that the magnetic force is velocity-dependent. However, when both electricity and magnetism are taken into account, the resulting theory (electromagnetism) is fully consistent with special relativity.[10][16] In particular, a phenomenon that appears purely electric or purely magnetic to one observer may be a mix of both to another, or more generally the relative contributions of electricity and magnetism are dependent on the frame of reference. Thus, special relativity "mixes" electricity and magnetism into a single, inseparable phenomenon called electromagnetism, analogous to how relativity "mixes" space and time into spacetime.
All observations on electromagnetism apply to what might be considered to be primarily magnetism, e.g. perturbations in the magnetic field are necessarily accompanied by a nonzero electric field, and propagate at the speed of light.[citation needed]
In a vacuum,

B = μ0H,

where μ0 is the vacuum permeability.

In a material,

B = μ0(H + M).

The quantity μ0M is called magnetic polarization.
If the field H is small, the response of the magnetization M in a diamagnet or paramagnet is approximately linear:

M = χH,

the constant of proportionality χ being called the magnetic susceptibility. If so,

B = μ0(H + M) = μ0(1 + χ)H.

In a hard magnet such as a ferromagnet, M is not proportional to the field and is generally nonzero even when H is zero (see Remanence).
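The linear-response relations M = χH and B = μ0(1 + χ)H can be evaluated numerically. The following is a small sketch; the susceptibility values in the comments are approximate textbook figures.

```python
# Numeric sketch of linear magnetic response for weakly magnetic
# (dia- and paramagnetic) materials.
MU_0 = 4e-7 * 3.141592653589793  # vacuum permeability μ0, in T·m/A

def magnetization(chi, H):
    """Induced magnetization M = χH for a linear material."""
    return chi * H

def B_field(chi, H):
    """Resulting flux density B = μ0(1 + χ)H."""
    return MU_0 * (1 + chi) * H

H = 1000.0  # applied field, A/m
# Aluminium (paramagnetic, χ ≈ +2.2e-5) slightly strengthens the
# field; copper (diamagnetic, χ ≈ -9.6e-6) slightly weakens it.
b_vacuum = B_field(0.0, H)
b_aluminium = B_field(2.2e-5, H)
b_copper = B_field(-9.6e-6, H)
```

The effect is tiny in both cases, consistent with the statement above that dia- and paramagnetic responses are usually too weak to be felt.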
The phenomenon of magnetism is "mediated" by the magnetic field. An electric current or magnetic dipole creates a magnetic field, and that field, in turn, imparts magnetic forces on other particles that are in the fields.
Maxwell's equations, which simplify to the Biot–Savart law in the case of steady currents, describe the origin and behavior of the fields that govern these forces. Therefore, magnetism is seen whenever electrically charged particles are in motion—for example, from movement of electrons in an electric current, or in certain cases from the orbital motion of electrons around an atom's nucleus. They also arise from "intrinsic" magnetic dipoles arising from quantum-mechanical spin.
The same situations that create magnetic fields—charge moving in a current or in an atom, and intrinsic magnetic dipoles—are also the situations in which a magnetic field has an effect, creating a force. Following is the formula for moving charge; for the forces on an intrinsic dipole, see magnetic dipole.
When a charged particle moves through a magnetic field B, it feels a Lorentz force F given by the cross product:[17]

F = qv × B

where q is the electric charge of the particle and v is its velocity vector.

Because this is a cross product, the force is perpendicular to both the motion of the particle and the magnetic field. It follows that the magnetic force does no work on the particle; it may change the direction of the particle's movement, but it cannot cause it to speed up or slow down. The magnitude of the force is

F = qvB sin θ

where θ is the angle between v and B.
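The cross-product form of the Lorentz force can be checked numerically. This sketch uses a hand-rolled 3-vector cross product and verifies that the force is perpendicular to both v and B.

```python
# Numeric check of the Lorentz force F = qv × B (no external libraries).
def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def lorentz_force(q, v, B):
    return tuple(q * component for component in cross(v, B))

# A positive charge moving along +x through a field along +y is pushed
# along +z; since F is perpendicular to v, the force does no work.
q = 1.6e-19          # charge, coulombs (elementary charge)
v = (1e5, 0.0, 0.0)  # velocity, m/s
B = (0.0, 0.5, 0.0)  # magnetic field, tesla
F = lorentz_force(q, v, B)
```

Here θ = 90°, so the magnitude qvB sin θ reduces to qvB, matching the z-component computed by the cross product.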
One tool for determining the direction of the velocity vector of a moving charge, the magnetic field, and the force exerted is labeling the index finger "V", the middle finger "B", and the thumb "F" with your right hand. When making a gun-like configuration, with the middle finger crossing under the index finger, the fingers represent the velocity vector, magnetic field vector, and force vector, respectively. See also right-hand rule.
A very common source of magnetic field found in nature is a dipole, with a "South pole" and a "North pole", terms dating back to the use of magnets as compasses, interacting with the Earth's magnetic field to indicate North and South on the globe. Since opposite ends of magnets are attracted, the north pole of a magnet is attracted to the south pole of another magnet. The Earth's North Magnetic Pole (currently in the Arctic Ocean, north of Canada) is physically a south pole, as it attracts the north pole of a compass.
A magnetic field contains energy, and physical systems move toward configurations with lower energy. When diamagnetic material is placed in a magnetic field, a magnetic dipole tends to align itself in opposed polarity to that field, thereby lowering the net field strength. When ferromagnetic material is placed within a magnetic field, the magnetic dipoles align to the applied field, thus expanding the domain walls of the magnetic domains.
Since a bar magnet gets its ferromagnetism from electrons distributed evenly throughout the bar, when a bar magnet is cut in half, each of the resulting pieces is a smaller bar magnet. Even though a magnet is said to have a north pole and a south pole, these two poles cannot be separated from each other. A monopole, if such a thing exists, would be a new and fundamentally different kind of magnetic object. It would act as an isolated north pole, not attached to a south pole, or vice versa. Monopoles would carry "magnetic charge" analogous to electric charge. Despite systematic searches since 1931, as of 2010, they have never been observed, and could very well not exist.[18]
Nevertheless, some theoretical physics models predict the existence of these magnetic monopoles. Paul Dirac observed in 1931 that, because electricity and magnetism show a certain symmetry, just as quantum theory predicts that individual positive or negative electric charges can be observed without the opposing charge, isolated South or North magnetic poles should be observable. Using quantum theory Dirac showed that if magnetic monopoles exist, then one could explain the quantization of electric charge—that is, why the observed elementary particles carry charges that are multiples of the charge of the electron.
Certain grand unified theories predict the existence of monopoles which, unlike elementary particles, are solitons (localized energy packets). The initial results of using these models to estimate the number of monopoles created in the Big Bang contradicted cosmological observations—the monopoles would have been so plentiful and massive that they would have long since halted the expansion of the universe. However, the idea of inflation (for which this problem served as a partial motivation) was successful in solving this problem, creating models in which monopoles existed but were rare enough to be consistent with current observations.[19]
Some organisms can detect magnetic fields, a phenomenon known as magnetoception. Some materials in living things are ferromagnetic, though it is unclear if the magnetic properties serve a special function or are merely a byproduct of containing iron. For instance, chitons, a type of marine mollusk, produce magnetite to harden their teeth, and even humans produce magnetite in bodily tissue.[21] Magnetobiology studies the effects of magnetic fields on living organisms; fields naturally produced by an organism are known as biomagnetism. Many biological organisms are mostly made of water, and because water is diamagnetic, extremely strong magnetic fields can repel these living things.
While heuristic explanations based on classical physics can be formulated, diamagnetism, paramagnetism and ferromagnetism can only be fully explained using quantum theory.[22][23]
A successful model was developed as early as 1927 by Walter Heitler and Fritz London, who derived quantum-mechanically how hydrogen molecules are formed from hydrogen atoms, i.e. from the atomic hydrogen orbitals
u_A and u_B centered at the nuclei A and B; see below. That this leads to magnetism is not at all obvious, but will be explained in the following.
According to the Heitler–London theory, so-called two-body molecular
σ-orbitals are formed, namely the resulting orbital (up to normalization) is:

ψ(r1, r2) = u_A(r1) u_B(r2) ± u_B(r1) u_A(r2)
Here the last product means that a first electron, r1, is in an atomic hydrogen orbital centered at the second nucleus, whereas the second electron runs around the first nucleus. This "exchange" phenomenon is an expression of the quantum-mechanical property that particles with identical properties cannot be distinguished. It is specific not only to the formation of chemical bonds, but also to magnetism. That is, in this connection the term exchange interaction arises, a term which is essential for the origin of magnetism, and which is roughly 100 to 1000 times stronger than the energies arising from the electrodynamic dipole-dipole interaction.
As for the spin function
χ(s1, s2), which is responsible for the magnetism, we have the already mentioned Pauli principle, namely that a symmetric orbital (i.e. with the + sign as above) must be multiplied with an antisymmetric spin function (i.e. with a − sign), and vice versa. Thus (again up to normalization):

χ(s1, s2) = α(s1) β(s2) − β(s1) α(s2)
I.e., not only
u_A and u_B must be substituted by α and β, respectively (the first means "spin up", the second "spin down"), but also the sign + by the − sign, and finally the continuous variables r_i by the discrete values s_i (= ±1/2); thereby we have

α(+1/2) = β(−1/2) = 1 and α(−1/2) = β(+1/2) = 0.

The "singlet state", i.e. the − sign, means: the spins are antiparallel; for the solid we have antiferromagnetism, and for two-atomic molecules diamagnetism. The tendency to form a (homoeopolar) chemical bond (that is, the formation of a symmetric molecular orbital, with the + sign) results through the Pauli principle automatically in an antisymmetric spin state (with the − sign). In contrast, the Coulomb repulsion of the electrons, i.e. their tendency to avoid each other, would lead to an antisymmetric orbital function (with the − sign) of these two particles, and complementarily to a symmetric spin function (with the + sign, one of the so-called "triplet functions"). Thus the spins would now be parallel (ferromagnetism in a solid, paramagnetism in two-atomic gases).
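The substitution rules above can be made concrete in a short sketch (my own illustration, not from the text; the function names are hypothetical), representing α and β as functions on the discrete spin values s = ±1/2 and checking that the singlet combination is antisymmetric under exchange:

```python
from fractions import Fraction

HALF = Fraction(1, 2)

def alpha(s):
    # "spin up": alpha(+1/2) = 1, alpha(-1/2) = 0
    return 1 if s == HALF else 0

def beta(s):
    # "spin down": beta(-1/2) = 1, beta(+1/2) = 0
    return 1 if s == -HALF else 0

def chi_singlet(s1, s2):
    # Antisymmetric combination (the "- sign" in the text);
    # normalization omitted for simplicity.
    return alpha(s1) * beta(s2) - beta(s1) * alpha(s2)

# The stated values:
assert alpha(HALF) == beta(-HALF) == 1
assert alpha(-HALF) == beta(HALF) == 0

# Antisymmetry under exchange of the two spins:
for s1 in (HALF, -HALF):
    for s2 in (HALF, -HALF):
        assert chi_singlet(s1, s2) == -chi_singlet(s2, s1)
```

The antisymmetry forces χ(s, s) = 0, which is one way of seeing that the singlet pairs opposite spins only.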
The last-mentioned tendency dominates in the metals iron, cobalt and nickel, and in some rare earths, which are ferromagnetic. Most of the other metals, where the first-mentioned tendency dominates, are nonmagnetic (e.g. sodium, aluminium, and magnesium) or antiferromagnetic (e.g. manganese). Diatomic gases are also almost exclusively diamagnetic, and not paramagnetic. However, the oxygen molecule, because of the involvement of π-orbitals, is an exception important for the life-sciences.
The Heitler–London considerations can be generalized to the Heisenberg model of magnetism (Heisenberg 1928).
The explanation of these phenomena is thus essentially based on the subtleties of quantum mechanics, whereas classical electrodynamics covers mainly the phenomenology.
en/4698.html.txt
ADDED
@@ -0,0 +1,261 @@
The police are a constituted body of persons empowered by a state, with the aim of enforcing the law, ensuring the safety, health and possessions of citizens, and preventing crime and civil disorder.[1][2] Their lawful powers include arrest and the use of force legitimized by the state via the monopoly on violence. The term is most commonly associated with the police forces of a sovereign state that are authorized to exercise the police power of that state within a defined legal or territorial area of responsibility. Police forces are often defined as being separate from the military and other organizations involved in the defense of the state against foreign aggressors; however, gendarmerie are military units charged with civil policing.[3] Police forces are usually public sector services, funded through taxes.
Law enforcement is only part of policing activity.[4] Policing has included an array of activities in different situations, but the predominant ones are concerned with the preservation of order.[5] In some societies, in the late 18th and early 19th centuries, these developed within the context of maintaining the class system and the protection of private property.[6] Police forces have become ubiquitous in modern societies. Nevertheless, their role can be controversial, as some are involved to varying degrees in corruption, police brutality and the enforcement of authoritarian rule.
A police force may also be referred to as a police department, police service, constabulary, gendarmerie, crime prevention, protective services, law enforcement agency, civil guard or civic guard. Members may be referred to as police officers, troopers, sheriffs, constables, rangers, peace officers or civic/civil guards. Ireland differs from other English-speaking countries by using the Irish language terms Garda (singular) and Gardaí (plural), for both the national police force and its members. The word "police" is the most universal and similar terms can be seen in many non-English speaking countries.[7]
Numerous slang terms exist for the police. Many slang terms for police officers are decades or centuries old with lost etymology. One of the oldest, "cop", has largely lost its slang connotations and become a common colloquial term used both by the public and police officers to refer to their profession.[8]
First attested in English in the early 15th century, initially in a range of senses encompassing '(public) policy; state; public order', the word police comes from Middle French police ('public order, administration, government'),[9] in turn from Latin politia,[10] which is the Latinisation of the Greek πολιτεία (politeia), "citizenship, administration, civil polity".[11] This is derived from πόλις (polis), "city".[12]
Law enforcement in ancient China was carried out by "prefects" for thousands of years since it developed in both the Chu and Jin kingdoms of the Spring and Autumn period. In Jin, dozens of prefects were spread across the state, each having limited authority and employment period. They were appointed by local magistrates, who reported to higher authorities such as governors, who in turn were appointed by the emperor, and they oversaw the civil administration of their "prefecture", or jurisdiction. Under each prefect were "subprefects" who helped collectively with law enforcement in the area. Some prefects were responsible for handling investigations, much like modern police detectives. Prefects could also be women.[13] Local citizens could report minor judicial offenses against them such as robberies at a local prefectural office. The concept of the "prefecture system" spread to other cultures such as Korea and Japan.
In Babylonia, law enforcement tasks were initially entrusted to individuals with military backgrounds or imperial magnates during the Old Babylonian period, but eventually, law enforcement was delegated to officers known as paqūdus, who were present in both cities and rural settlements. A paqūdu was responsible for investigating petty crimes and carrying out arrests.[14][15]
In ancient Egypt evidence of law enforcement exists as far back as the Old Kingdom period. There are records of an office known as "Judge Commandant of the Police" dating to the fourth dynasty.[16] During the fifth dynasty at the end of the Old Kingdom period, officers armed with wooden sticks were tasked with guarding public places such as markets, temples, and parks, and apprehending criminals. They are known to have made use of trained monkeys, baboons, and dogs in guard duties and catching criminals. After the Old Kingdom collapsed, ushering in the First Intermediate Period, it is thought that the same model applied. During this period, Bedouins were hired to guard the borders and protect trade caravans. During the Middle Kingdom period, a professional police force was created with a specific focus on enforcing the law, as opposed to the previous informal arrangement of using warriors as police. The police force was further reformed during the New Kingdom period. Police officers served as interrogators, prosecutors, and court bailiffs, and were responsible for administering punishments handed down by judges. In addition, there were special units of police officers trained as priests who were responsible for guarding temples and tombs and preventing inappropriate behavior at festivals or improper observation of religious rites during services. Other police units were tasked with guarding caravans, guarding border crossings, protecting royal necropolises, guarding slaves at work or during transport, patrolling the Nile River, and guarding administrative buildings. The police did not guard rural communities, which often took care of their own judicial problems by appealing to village elders, but many of them had a constable to enforce state laws.[17]
In ancient Greece, publicly owned slaves were used by magistrates as police. In Athens, a group of 300 Scythian slaves (the ῥαβδοῦχοι, "rod-bearers") was used to guard public meetings to keep order and for crowd control, and also assisted with dealing with criminals, handling prisoners, and making arrests. Other duties associated with modern policing, such as investigating crimes, were left to the citizens themselves.[18] In Sparta, a secret police force called the krypteia existed to watch the large population of helots, or slaves.[19]
In the Roman Empire, the army, rather than a dedicated police organization, initially provided security. Local watchmen were hired by cities to provide some extra security. Magistrates such as procurators fiscal and quaestors investigated crimes. There was no concept of public prosecution, so victims of crime or their families had to organize and manage the prosecution themselves. Under the reign of Augustus, when the capital had grown to almost one million inhabitants, 14 wards were created; the wards were protected by seven squads of 1,000 men called "vigiles", who acted as firemen and nightwatchmen. Their duties included apprehending thieves and robbers, capturing runaway slaves, guarding the baths at night, and stopping disturbances of the peace. The vigiles primarily dealt with petty crime, while violent crime, sedition, and rioting was handled by the Urban Cohorts and even the Praetorian Guard if necessary, though the vigiles could act in a supporting role in these situations.
Law enforcement systems existed in the various kingdoms and empires of ancient India. The Apastamba Dharmasutra prescribes that kings should appoint officers and subordinates in the towns and villages to protect their subjects from crime. Various inscriptions and literature from ancient India suggest that a variety of roles existed for law enforcement officials such as those of a constable, thief catcher, watchman, and detective.[20]
The Persian Empire had well-organized police forces. A police force existed in every place of importance. In the cities, each ward was under the command of a Superintendent of Police, known as a Kuipan, who was expected to command implicit obedience in his subordinates. Police officers also acted as prosecutors and carried out punishments imposed by the courts. They were required to know the court procedure for prosecuting cases and advancing accusations.[21]
In ancient Israel and Judah, officials with the responsibility of making declarations to the people, guarding the king's person, supervising public works, and executing the orders of the courts existed in the urban areas. They are repeatedly mentioned in the Hebrew Bible, and this system lasted into the period of Roman rule. The first-century Jewish historian Josephus related that every judge had two such officers under his command. Levites were preferred for this role. Cities and towns also had night watchmen. Besides officers of the town, there were officers for every tribe. The temple in Jerusalem had special temple police to guard it. The Talmud also mentions various local police officials in the Jewish communities of the Land of Israel and Babylon who supervised economic activity. Their Greek-sounding titles suggest that the roles were introduced under Hellenic influence. Most of these officials received their authority from local courts and their salaries were drawn from the town treasury. The Talmud also mentions city watchmen and mounted and armed watchmen in the suburbs.[22]
In many regions of pre-colonial Africa, particularly West and Central Africa, guild-like secret societies emerged as law enforcement. In the absence of a court system or written legal code, they carried out police-like activities, employing varying degrees of coercion to enforce conformity and deter antisocial behavior.[23] In ancient Ethiopia, armed retainers of the nobility enforced law in the countryside according to the will of their leaders. The Songhai Empire had officials known as assara-munidios, or "enforcers", acting as police.
Pre-Columbian Mesoamerican civilizations also had organized law enforcement. The city-states of the Maya civilization had constables known as tupils, as well as bailiffs.[24] In the Aztec Empire, judges had officers serving under them who were empowered to perform arrests, even of dignitaries.[25] In the Inca Empire, officials called curaca enforced the law among the households they were assigned to oversee, with inspectors known as tokoyrikoq (lit. "he who sees all") also stationed throughout the provinces to keep order.[26][27]
In medieval Spain, Santas Hermandades, or "holy brotherhoods", peacekeeping associations of armed individuals, were a characteristic of municipal life, especially in Castile. As medieval Spanish kings often could not offer adequate protection, protective municipal leagues began to emerge in the twelfth century against banditry and other rural criminals, and against the lawless nobility or to support one or another claimant to a crown.
These organizations were intended to be temporary, but became a long-standing fixture of Spain. The first recorded case of the formation of an hermandad occurred when the towns and the peasantry of the north united to police the pilgrim road to Santiago de Compostela in Galicia, and protect the pilgrims against robber knights.
Throughout the Middle Ages such alliances were frequently formed by combinations of towns to protect the roads connecting them, and were occasionally extended to political purposes. Among the most powerful was the league of North Castilian and Basque ports, the Hermandad de las marismas: Toledo, Talavera, and Villarreal.
As one of their first acts after the end of the War of the Castilian Succession in 1479, Ferdinand II of Aragon and Isabella I of Castile established the centrally-organized and efficient Holy Brotherhood as a national police force. They adapted an existing brotherhood to the purpose of a general police acting under officials appointed by themselves, and endowed with great powers of summary jurisdiction even in capital cases. The original brotherhoods continued to serve as modest local police-units until their final suppression in 1835.
The Vehmic courts of Germany provided some policing in the absence of strong state institutions. Such courts had a chairman who presided over a session and lay judges who passed judgement and carried out law enforcement tasks. Among the responsibilities that lay judges had were giving formal warnings to known troublemakers, issuing warrants, and carrying out executions.
In the medieval Islamic Caliphates, police were known as Shurta. Bodies termed "Shurta" existed perhaps as early as the Caliphate of Uthman. It is known in the Abbasid and Umayyad Caliphates. Their primary roles were to act as police and internal security forces but could also be used for other duties such as customs and tax enforcement, rubbish collection, and acting as bodyguards for governors. From the 10th century, the importance of the Shurta declined as the army assumed internal security tasks while cities became more autonomous and handled their own policing needs locally, such as by hiring watchmen. In addition, officials called muhtasibs were responsible for supervising bazaars and economic activity in general in the medieval Islamic world.
In France during the Middle Ages, there were two Great Officers of the Crown of France with police responsibilities: The Marshal of France and the Grand Constable of France. The military policing responsibilities of the Marshal of France were delegated to the Marshal's provost, whose force was known as the Marshalcy because its authority ultimately derived from the Marshal. The marshalcy dates back to the Hundred Years' War, and some historians trace it back to the early 12th century. Another organisation, the Constabulary (French: Connétablie), was under the command of the Constable of France. The constabulary was regularised as a military body in 1337. Under Francis I of France (who reigned 1515–1547), the Maréchaussée was merged with the Constabulary. The resulting force was also known as the Maréchaussée, or, formally, the Constabulary and Marshalcy of France.
The English system of maintaining public order since the Norman conquest was a private system of tithings known as the mutual pledge system. This system was introduced under Alfred the Great. Communities were divided into groups of ten families called tithings, each of which was overseen by a chief tithingman. Every household head was responsible for the good behavior of his own family and the good behavior of other members of his tithing. Every male aged 12 and over was required to participate in a tithing. Members of tithings were responsible for raising "hue and cry" upon witnessing or learning of a crime, and the men of his tithing were responsible for capturing the criminal. The person the tithing captured would then be brought before the chief tithingman, who would determine guilt or innocence and punishment. All members of the criminal's tithing would be responsible for paying the fine. A group of ten tithings was known as a "hundred" and every hundred was overseen by an official known as a reeve. Hundreds ensured that if a criminal escaped to a neighboring village, he could be captured and returned to his village. If a criminal was not apprehended, then the entire hundred could be fined. The hundreds were governed by administrative divisions known as shires, the rough equivalent of a modern county, which were overseen by an official known as a shire-reeve, from which the term Sheriff evolved. The shire-reeve had the power of posse comitatus, meaning he could gather the men of his shire to pursue a criminal.[28] Following the Norman conquest of England in 1066, the tithing system was tightened with the frankpledge system. By the end of the 13th century, the office of constable developed. Constables had the same responsibilities as chief tithingmen and additionally as royal officers. The constable was elected by his parish every year. Eventually, constables became the first 'police' official to be tax-supported. 
In urban areas, watchmen were tasked with keeping order and enforcing nighttime curfew. Watchmen guarded the town gates at night, patrolled the streets, arrested those on the streets at night without good reason, and also acted as firefighters. Eventually the office of justice of the peace was established, with a justice of the peace overseeing constables.[29][30] There was also a system of investigative "juries".
The Assize of Arms of 1252, which required the appointment of constables to summon men to arms, quell breaches of the peace, and to deliver offenders to the sheriff or reeve, is cited as one of the earliest antecedents of the English police.[31] The Statute of Winchester of 1285 is also cited as the primary legislation regulating the policing of the country between the Norman Conquest and the Metropolitan Police Act 1829.[31][32]
From about 1500, private watchmen were funded by private individuals and organisations to carry out police functions. They were later nicknamed 'Charlies', probably after the reigning monarch King Charles II. Thief-takers were also rewarded for catching thieves and returning the stolen property.
The earliest English use of the word "police" seems to have been the term "Polles" mentioned in the book "The Second Part of the Institutes of the Lawes of England" published in 1642.[33]
The first centrally organised and uniformed police force was created by the government of King Louis XIV in 1667 to police the city of Paris, then the largest city in Europe. The royal edict, registered by the Parlement of Paris on March 15, 1667 created the office of lieutenant général de police ("lieutenant general of police"), who was to be the head of the new Paris police force, and defined the task of the police as "ensuring the peace and quiet of the public and of private individuals, purging the city of what may cause disturbances, procuring abundance, and having each and everyone live according to their station and their duties".
This office was first held by Gabriel Nicolas de la Reynie, who had 44 commissaires de police (police commissioners) under his authority. In 1709, these commissioners were assisted by inspecteurs de police (police inspectors). The city of Paris was divided into 16 districts policed by the commissaires, each assigned to a particular district and assisted by a growing bureaucracy. The scheme of the Paris police force was extended to the rest of France by a royal edict of October 1699, resulting in the creation of lieutenants general of police in all large French cities and towns.
After the French Revolution, Napoléon I reorganized the police in Paris and other cities with more than 5,000 inhabitants on February 17, 1800 as the Prefecture of Police. On March 12, 1829, a government decree created the first uniformed police in France, known as sergents de ville ("city sergeants"), which the Paris Prefecture of Police's website claims were the first uniformed policemen in the world.[34]
In 1737, George II began paying some London and Middlesex watchmen with tax monies, beginning the shift to government control. In 1749 Henry Fielding began organizing a force of quasi-professional constables known as the Bow Street Runners. The Macdaniel affair added further impetus for a publicly salaried police force that did not depend on rewards. Nonetheless, in 1828, there were privately financed police units in no fewer than 45 parishes within a 10-mile radius of London.
The word "police" was borrowed from French into the English language in the 18th century, but for a long time it applied only to French and continental European police forces. The word, and the concept of police itself, were "disliked as a symbol of foreign oppression" (according to Britannica 1911). Before the 19th century, the first use of the word "police" recorded in government documents in the United Kingdom was the appointment of Commissioners of Police for Scotland in 1714 and the creation of the Marine Police in 1798.
Following early police forces established in 1779 and 1788 in Glasgow, Scotland, the Glasgow authorities successfully petitioned the government to pass the Glasgow Police Act establishing the City of Glasgow Police in 1800.[35] Other Scottish towns soon followed suit and set up their own police forces through acts of parliament.[36] In Ireland, the Irish Constabulary Act of 1822 marked the beginning of the Royal Irish Constabulary. The Act established a force in each barony with chief constables and inspectors general under the control of the civil administration at Dublin Castle. By 1841 this force numbered over 8,600 men.
In 1797, Patrick Colquhoun was able to persuade the West Indies merchants who operated at the Pool of London on the River Thames, to establish a police force at the docks to prevent rampant theft that was causing annual estimated losses of £500,000 worth of cargo.[37] The idea of a police, as it then existed in France, was considered as a potentially undesirable foreign import. In building the case for the police in the face of England's firm anti-police sentiment, Colquhoun framed the political rationale on economic indicators to show that a police dedicated to crime prevention was "perfectly congenial to the principle of the British constitution". Moreover, he went so far as to praise the French system, which had reached "the greatest degree of perfection" in his estimation.[38]
With the initial investment of £4,200, the new trial force of the Thames River Police began with about 50 men charged with policing 33,000 workers in the river trades, of whom Colquhoun claimed 11,000 were known criminals and "on the game". The force was a success after its first year, and his men had "established their worth by saving £122,000 worth of cargo and by the rescuing of several lives". Word of this success spread quickly, and the government passed the Marine Police Bill on 28 July 1800, transforming it from a private to public police agency; now the oldest police force in the world. Colquhoun published a book on the experiment, The Commerce and Policing of the River Thames. It found receptive audiences far outside London, and inspired similar forces in other cities, notably, New York City, Dublin, and Sydney.[37]
Colquhoun's utilitarian approach to the problem – using a cost-benefit argument to obtain support from businesses standing to benefit – allowed him to achieve what Henry and John Fielding had failed to achieve for their Bow Street detectives. Unlike the stipendiary system at Bow Street, the river police were full-time, salaried officers prohibited from taking private fees.[39] His other contribution was the concept of preventive policing: his police were to act as a highly visible deterrent to crime by their permanent presence on the Thames.[38] Colquhoun's innovations were a critical development leading up to Robert Peel's "new" police three decades later.[40]
London was fast reaching a size unprecedented in world history, due to the onset of the Industrial Revolution.[41] It became clear that the locally maintained system of volunteer constables and "watchmen" was ineffective, both in detecting and preventing crime. A parliamentary committee was appointed to investigate the system of policing in London. Upon his appointment as Home Secretary in 1822, Sir Robert Peel established a second, more effective committee and acted upon its findings.
Royal assent was given to the Metropolitan Police Act 1829,[42] and the Metropolitan Police Service was established on September 29, 1829, in London as the first modern and professional police force in the world.[43][44][45]
Peel, widely regarded as the father of modern policing,[46] was heavily influenced by the social and legal philosophy of Jeremy Bentham, who called for a strong and centralised, but politically neutral, police force for the maintenance of social order, for the protection of people from crime and to act as a visible deterrent to urban crime and disorder.[47] Peel decided to standardise the police force as an official paid profession, to organise it in a civilian fashion, and to make it answerable to the public.[48]
Due to public fears concerning the deployment of the military in domestic matters, Peel organised the force along civilian, rather than paramilitary, lines. To appear neutral, the uniform was deliberately manufactured in blue rather than red, which was then a military colour, and officers were armed only with a wooden truncheon and a rattle to signal the need for assistance. In addition, police ranks did not include military titles, with the exception of Sergeant.[49]
To distance the new police force from the initial public view of it as a new tool of government repression, Peel publicised the so-called Peelian principles, which set down basic guidelines for ethical policing.[50][51]
The 1829 Metropolitan Police Act created a modern police force by limiting the purview of the force and its powers, and envisioning it as merely an organ of the judicial system. Their job was apolitical: to maintain the peace and apprehend criminals for the courts to process according to the law.[52] This was very different from the "continental model" of the police force that had been developed in France, where the police force worked within the parameters of the absolutist state as an extension of the authority of the monarch and functioned as part of the governing state.
In 1863, the Metropolitan Police were issued with the distinctive custodian helmet, and in 1884 they switched to the use of whistles that could be heard from much further away.[53] The Metropolitan Police became a model for the police forces in many countries, such as the United States, and most of the British Empire.[54][55] Bobbies can still be found in many parts of the Commonwealth of Nations.
In Australia, the first police force having centralised command as well as jurisdiction over an entire colony was the South Australia Police, formed in 1838 under Henry Inman.
However, whilst the New South Wales Police Force was established in 1862, it was made up of a large number of policing and military units operating within the then Colony of New South Wales, and traces its links back to the Royal Marines. The Police Regulation Act of 1862 tightly regulated and centralised all of the police forces operating throughout the Colony of New South Wales.
The New South Wales Police Force remains the largest police force in Australia in terms of personnel and physical resources. It is also the only police force that requires its recruits to undertake university studies at the recruit level and to pay for their own education.
In 1566, the first police investigator of Rio de Janeiro was recruited. By the 17th century, most captaincies already had local units with law enforcement functions. On July 9, 1775, a cavalry regiment was created in the state of Minas Gerais for maintaining law and order. In 1808, the Portuguese royal family relocated to Brazil because of the French invasion of Portugal. King João VI established the "Intendência Geral de Polícia" (General Police Intendancy) for investigations. He also created a Royal Police Guard for Rio de Janeiro in 1809. In 1831, after independence, each province started organizing its local "military police", with order maintenance tasks. The Federal Railroad Police was created in 1852, the Federal Highway Police in 1928, and the Federal Police in 1967.
Established in 1729, the Royal Newfoundland Constabulary (RNC) was the first policing service founded in Canada. The establishment of modern policing services in the Canadas occurred during the 1830s, modelling their services after the London Metropolitan Police, and adopting the ideas of the Peelian principles.[56] The Toronto Police Service was established in 1834, whereas the Service de police de la Ville de Québec was established in 1840.[56]
A national police service, the Dominion Police, was founded in 1868. Initially the Dominion Police provided security for parliament, but its responsibilities quickly grew. In 1870, Rupert's Land and the North-Western Territory were incorporated into the country. In an effort to police its newly acquired territory, the Canadian government established the North-West Mounted Police in 1873 (renamed Royal North-West Mounted Police in 1904).[56] In 1920, the Dominion Police, and the Royal Northwest Mounted Police were amalgamated into the Royal Canadian Mounted Police (RCMP).[56]
The RCMP provides federal law enforcement, as well as law enforcement in eight provinces and all three territories. The provinces of Ontario and Quebec maintain their own provincial police forces, the Ontario Provincial Police (OPP) and the Sûreté du Québec (SQ). Policing in Newfoundland and Labrador is provided by the RCMP and the RNC. The aforementioned services also provide municipal policing, although larger Canadian municipalities may establish their own police services.
In Lebanon, modern police were established in 1861, with creation of the Gendarmerie.[57]
In India, the police are under the control of the respective states and union territories, forming the State Police Services (SPS). Candidates selected for the SPS are usually posted as Deputy Superintendent of Police or Assistant Commissioner of Police once their probationary period ends. After prescribed satisfactory service in the SPS, officers are nominated to the Indian Police Service.[58] The service color is usually dark blue and red, while the uniform color is khaki.[59]
In British North America, policing was initially provided by local elected officials. For instance, the New York Sheriff's Office was founded in 1626, and the Albany County Sheriff's Department in the 1660s. In the colonial period, policing was provided by elected sheriffs and local militias.
In the 1700s, the Province of Carolina (later North- and South Carolina) established slave patrols in order to prevent slave rebellions and enslaved people from escaping.[60][61] For example, by 1785 the Charleston Guard and Watch had "a distinct chain of command, uniforms, sole responsibility for policing, salary, authorized use of force, and a focus on preventing 'crime'."[62]
In 1789 the United States Marshals Service was established, followed by other federal services such as the U.S. Parks Police (1791)[63] and U.S. Mint Police (1792).[64] The first city police services were established in Philadelphia in 1751,[65] Richmond, Virginia in 1807,[66] Boston in 1838,[67] and New York in 1845.[68] The U.S. Secret Service was founded in 1865 and was for some time the main investigative body for the federal government.[69]
In the American Old West, law enforcement was carried out by local sheriffs, rangers, constables, and federal marshals. There were also town marshals responsible for serving civil and criminal warrants, maintaining the jails, and carrying out arrests for petty crime.[70][71]
In recent years, in addition to federal, state, and local forces, some special districts have been formed to provide extra police protection in designated areas. These districts may be known as neighborhood improvement districts, crime prevention districts, or security districts.[72]
Michel Foucault claims that the contemporary concept of the police as a paid and funded functionary of the state was developed by German and French legal scholars and practitioners in public administration and statistics in the 17th and early 18th centuries, most notably with Nicolas Delamare's Traité de la Police ("Treatise on the Police"), first published in 1705. The German Polizeiwissenschaft (Science of Police) was first theorized by Philipp von Hörnigk, a 17th-century Austrian political economist and civil servant, and much more famously by Johann Heinrich Gottlob Justi, who produced an important theoretical work, known as cameral science, on the formulation of police.[73] Foucault cites Magdalene Humpert, author of Bibliographie der Kameralwissenschaften (1937), who notes that a substantial bibliography of over 4,000 works on the practice of Polizeiwissenschaft was produced; however, this may be a mistranslation of Foucault's own work, as Humpert's actual text states that over 14,000 items were produced, dating from 1520 to 1850.[74][75]
As conceptualized by the Polizeiwissenschaft, according to Foucault, the police had an administrative, economic and social duty ("procuring abundance"). It was in charge of demographic concerns and had to be incorporated within the Western political philosophy of raison d'état, giving the superficial appearance of empowering the population (while unwittingly supervising it), which, according to mercantilist theory, was to be the main strength of the state. Thus, its functions largely overreached simple law enforcement activities and included public health concerns, urban planning (which was important because of the miasma theory of disease; thus, cemeteries were moved out of town, etc.), and surveillance of prices.[76]
The concept of preventive policing, or policing to deter crime from taking place, gained influence in the late 18th century. Police Magistrate John Fielding, head of the Bow Street Runners, argued that "...it is much better to prevent even one man from being a rogue than apprehending and bringing forty to justice."[77]
The Utilitarian philosopher Jeremy Bentham promoted the views of the Italian Marquis Cesare Beccaria, and disseminated a translated version of his "Essay on Crimes and Punishments". Bentham espoused the guiding principle of "the greatest good for the greatest number":
It is better to prevent crimes than to punish them. This is the chief aim of every good system of legislation, which is the art of leading men to the greatest possible happiness or to the least possible misery, according to calculation of all the goods and evils of life.[77]
Patrick Colquhoun's influential work, A Treatise on the Police of the Metropolis (1797) was heavily influenced by Benthamite thought. Colquhoun's Thames River Police was founded on these principles, and in contrast to the Bow Street Runners, acted as a deterrent by their continual presence on the riverfront, in addition to being able to intervene if they spotted a crime in progress.[78]
Edwin Chadwick's 1829 article, "Preventive police" in the London Review,[79] argued that prevention ought to be the primary concern of a police body, which was not the case in practice. The reason, argued Chadwick, was that "A preventive police would act more immediately by placing difficulties in obtaining the objects of temptation." In contrast to a deterrent of punishment, a preventive police force would deter criminality by making crime cost-ineffective – "crime doesn't pay". In the second draft of his 1829 Police Act, the "object" of the new Metropolitan Police was changed by Robert Peel to the "principal object," which was the "prevention of crime."[80] Later historians would attribute the perception of England's "appearance of orderliness and love of public order" to the preventive principle entrenched in Peel's police system.[81]
Development of modern police forces around the world was contemporary to the formation of the state, later defined by sociologist Max Weber as achieving a "monopoly on the legitimate use of physical force" and which was primarily exercised by the police and the military. Marxist theory situates the development of the modern state as part of the rise of capitalism, in which the police are one component of the bourgeoisie's repressive apparatus for subjugating the working class. By contrast, the Peelian principles argue that "the power of the police...is dependent on public approval of their existence, actions and behavior", a philosophy known as policing by consent.
Police forces include both preventive (uniformed) police and detectives. Terminology varies from country to country. Police functions include protecting life and property, enforcing criminal law, criminal investigations, regulating traffic, crowd control, public safety duties, civil defense, emergency management, searching for missing persons, lost property and other duties concerned with public order. Regardless of size, police forces are generally organized as a hierarchy with multiple ranks. The exact structures and the names of rank vary considerably by country.
The police who wear uniforms make up the majority of a police service's personnel. Their main duty is to respond to calls to the emergency telephone number. When not responding to these call-outs, they will do work aimed at preventing crime, such as patrols. The uniformed police are known by varying names such as preventive police, the uniform branch/division, administrative police, order police, the patrol bureau/division or patrol. In Australia and the United Kingdom, patrol personnel are also known as "general duties" officers.[82] Atypically, Brazil's preventive police are known as Military Police.[83]
As implied by the name, uniformed police wear uniforms. They perform functions that require an immediate recognition of an officer's legal authority and a potential need for force. Most commonly this means intervening to stop a crime in progress and securing the scene of a crime that has already happened. Besides dealing with crime, these officers may also manage and monitor traffic, carry out community policing duties, maintain order at public events or carry out searches for missing people (in 2012, the latter accounted for 14% of police time in the United Kingdom).[84] As most of these duties must be available as a 24/7 service, uniformed police are required to do shift work.
Police detectives are responsible for investigations and detective work. Detectives may be called Investigations Police, Judiciary/Judicial Police, and Criminal Police. In the UK, they are often referred to by the name of their department, the Criminal Investigation Department (CID). Detectives typically make up roughly 15–25% of a police service's personnel.
Detectives, in contrast to uniformed police, typically wear 'business attire' in bureaucratic and investigative functions where a uniformed presence would be either a distraction or intimidating, but a need to establish police authority still exists. "Plainclothes" officers dress in attire consistent with that worn by the general public for purposes of blending in.
In some cases, police are assigned to work "undercover", where they conceal their police identity to investigate crimes, such as organized crime or narcotics crime, that are unsolvable by other means. In some cases this type of policing shares aspects with espionage.
The relationship between detective and uniformed branches varies by country. In the United States, there is high variation within the country itself. Many US police departments require detectives to spend some time on temporary assignments in the patrol division.[citation needed] The argument is that rotating officers helps the detectives to better understand the uniformed officers' work, to promote cross-training in a wider variety of skills, and prevent "cliques" that can contribute to corruption or other unethical behavior.[citation needed] Conversely, some countries regard detective work as being an entirely separate profession, with detectives working in separate agencies and recruited without having to serve in uniform. A common compromise in English-speaking countries is that most detectives are recruited from the uniformed branch, but once qualified they tend to spend the rest of their careers in the detective branch.
Another point of variation is whether detectives have extra status. In some forces, such as the New York Police Department and Philadelphia Police Department, a regular detective holds a higher rank than a regular police officer. In others, such as British police forces and Canadian police forces, a regular detective has equal status with regular uniformed officers. Officers still have to take exams to move to the detective branch, but the move is regarded as being a specialization, rather than a promotion.
Police services often include part-time or volunteer officers, some of whom have other jobs outside policing. These may be paid positions or entirely volunteer. These are known by a variety of names, such as reserves, auxiliary police or special constables.
Other volunteer organizations work with the police and perform some of their duties. Groups in the U.S., including the Retired and Senior Volunteer Program, Community Emergency Response Team and the Boy Scouts' Police Explorers, provide training, traffic and crowd control, disaster response and other policing duties. In the U.S., the Volunteers in Police Service program assists over 200,000 volunteers in almost 2,000 programs.[85] Volunteers may also work on the support staff. Examples of these schemes are Volunteers in Police Service in the US, Police Support Volunteers in the UK and Volunteers in Policing in New South Wales.
Specialized preventive and detective groups, or Specialist Investigation Departments exist within many law enforcement organizations either for dealing with particular types of crime, such as traffic law enforcement, K9, crash investigation, homicide, or fraud; or for situations requiring specialized skills, such as underwater search, aviation, explosive device disposal ("bomb squad"), and computer crime.
Most larger jurisdictions also employ specially selected and trained quasi-military units armed with military-grade weapons for the purposes of dealing with particularly violent situations beyond the capability of a patrol officer response, including high-risk warrant service and barricaded suspects. In the United States these units go by a variety of names, but are commonly known as SWAT (Special Weapons And Tactics) teams.
In counterinsurgency-type campaigns, select and specially trained units of police, armed and equipped as light infantry, have been designated as police field forces, performing paramilitary-type patrols and ambushes in highly dangerous areas whilst retaining their police powers.[86]
Because their situational mandate typically focuses on removing innocent bystanders from dangerous people and dangerous situations, not violent resolution, they are often equipped with non-lethal tactical tools like chemical agents, "flashbang" and concussion grenades, and rubber bullets. The Specialist Firearms Command (CO19)[87] of the Metropolitan Police in London is a group of armed police used in dangerous situations including hostage taking, armed robbery/assault and terrorism.
Police may have administrative duties that are not directly related to enforcing the law, such as issuing firearms licenses. The extent to which police have these functions varies among countries, with police in France, Germany, and other continental European countries handling such tasks to a greater extent than their British counterparts.[82]
Some Islamic societies have religious police, who enforce the application of Islamic Sharia law. Their authority may include the power to arrest unrelated men and women caught socializing and anyone engaged in homosexual behavior or prostitution; to enforce Islamic dress codes; and to enforce store closures during Islamic prayer time.[88][89]
They enforce Muslim dietary laws, prohibit the consumption or sale of alcoholic beverages and pork, and seize banned consumer products and media regarded as un-Islamic, such as CDs/DVDs of various Western musical groups, television shows and film.[88][89] In Saudi Arabia, the Mutaween actively prevent the practice or proselytizing of non-Islamic religions within Saudi Arabia, where they are banned.[88][89]
Most countries are members of the International Criminal Police Organization (Interpol), established to detect and fight transnational crime and provide for international co-operation and co-ordination of other police activities, such as notifying relatives of the death of foreign nationals. Interpol does not conduct investigations or arrests by itself, but only serves as a central point for information on crime, suspects and criminals. Political crimes are excluded from its competencies.
The terms international policing, transnational policing, and global policing began to be used from the early 1990s onwards to describe forms of policing that transcended the boundaries of the sovereign nation-state (Nadelmann, 1993),[90] (Sheptycki, 1995).[91] These terms refer in various ways to practices and forms of policing that, in some sense, transcend national borders. This includes a variety of practices, but international police cooperation, criminal intelligence exchange between police agencies working in different nation-states, and police development-aid to weak, failed or failing states are the three types that have received the most scholarly attention.
Historical studies reveal that policing agents have undertaken a variety of cross-border police missions for many years (Deflem, 2002).[92] For example, in the 19th century a number of European policing agencies undertook cross-border surveillance because of concerns about anarchist agitators and other political radicals. A notable example of this was the occasional surveillance by Prussian police of Karl Marx during the years he remained resident in London. The interests of public police agencies in cross-border co-operation in the control of political radicalism and ordinary law crime were primarily initiated in Europe, which eventually led to the establishment of Interpol before the Second World War. There are also many interesting examples of cross-border policing under private auspices and by municipal police forces that date back to the 19th century (Nadelmann, 1993).[90] It has been established that modern policing has transgressed national boundaries from time to time almost from its inception. It is also generally agreed that in the post–Cold War era this type of practice became more significant and frequent (Sheptycki, 2000).[93]
Not a lot of empirical work on the practices of inter/transnational information and intelligence sharing has been undertaken. A notable exception is James Sheptycki's study of police cooperation in the English Channel region (2002),[94] which provides a systematic content analysis of information exchange files and a description of how these transnational information and intelligence exchanges are transformed into police case-work. The study showed that transnational police information sharing was routinized in the cross-Channel region from 1968 on the basis of agreements directly between the police agencies and without any formal agreement between the countries concerned. By 1992, with the signing of the Schengen Treaty, which formalized aspects of police information exchange across the territory of the European Union, there were worries that much, if not all, of this intelligence sharing was opaque, raising questions about the efficacy of the accountability mechanisms governing police information sharing in Europe (Joubert and Bevers, 1996).[95]
Studies of this kind outside of Europe are even rarer, so it is difficult to make generalizations, but one small-scale study that compared transnational police information and intelligence sharing practices at specific cross-border locations in North America and Europe confirmed that low visibility of police information and intelligence sharing was a common feature (Alain, 2001).[96] Intelligence-led policing is now common practice in most advanced countries (Ratcliffe, 2007)[97] and it is likely that police intelligence sharing and information exchange has a common morphology around the world (Ratcliffe, 2007).[97] James Sheptycki has analyzed the effects of the new information technologies on the organization of policing-intelligence and suggests that a number of 'organizational pathologies' have arisen that make the functioning of security-intelligence processes in transnational policing deeply problematic. He argues that transnational police information circuits help to "compose the panic scenes of the security-control society".[98] The paradoxical effect is that the harder policing agencies work to produce security, the greater the feelings of insecurity.
Police development-aid to weak, failed or failing states is another form of transnational policing that has garnered attention. It plays an increasingly important role in United Nations peacekeeping, and this looks set to grow in the years ahead, especially as the international community seeks to develop the rule of law and reform security institutions in states recovering from conflict (Goldsmith and Sheptycki, 2007).[99] With transnational police development-aid, the imbalances of power between donors and recipients are stark, and there are questions about the applicability and transportability of policing models between jurisdictions (Hills, 2009).[100]
Perhaps the greatest question regarding the future development of transnational policing is: in whose interest is it?[citation needed] At a more practical level, the question translates into one about how to make transnational policing institutions democratically accountable (Sheptycki, 2004).[101] For example, according to the Global Accountability Report for 2007 (Lloyd, et al. 2007) Interpol had the lowest scores in its category (IGOs), coming in tenth with a score of 22% on overall accountability capabilities (p. 19).[102] As this report points out, and the existing academic literature on transnational policing seems to confirm, this is a secretive area and one not open to civil society involvement.[citation needed]
In many jurisdictions, police officers carry firearms, primarily handguns, in the normal course of their duties. In the United Kingdom (except Northern Ireland), Iceland, Ireland, Norway, New Zealand,[103] and Malta, with the exception of specialist units, officers do not carry firearms as a matter of course. Norwegian police carry firearms in their vehicles, but not on their duty belts, and must obtain authorisation before the weapons can be removed from the vehicle.
Police often have specialist units for handling armed offenders and similar dangerous situations, and can (depending on local laws), in some extreme circumstances, call on the military, since military aid to the civil power is a role of many armed forces. Perhaps the most high-profile example of this was in 1980, when the Metropolitan Police handed control of the Iranian Embassy siege to the Special Air Service.
They can also be armed with non-lethal (more accurately known as "less than lethal" or "less-lethal", given that they can still be deadly[104]) weaponry, particularly for riot control. Non-lethal weapons include batons, tear gas, riot control agents, rubber bullets, riot shields, water cannons and electroshock weapons. Police officers typically carry handcuffs to restrain suspects. The use of firearms or deadly force is typically a last resort, only to be used when necessary to save human life, although some jurisdictions (such as Brazil) allow its use against fleeing felons and escaped convicts. American police are allowed to use deadly force simply if they "think their life is in danger."[105] A "shoot-to-kill" policy was recently introduced in South Africa, which allows police to use deadly force against any person who poses a significant threat to them or civilians.[106] With the country having one of the highest rates of violent crime, President Jacob Zuma stated that South Africa needs to handle crime differently from other countries.[107]
Modern police forces make extensive use of two-way radio communications equipment, carried both on the person and installed in vehicles, to co-ordinate their work, share information, and get help quickly. In recent years, vehicle-installed mobile data terminals have enhanced the ability of police communications, enabling easier dispatching of calls, criminal background checks on persons of interest to be completed in a matter of seconds, and updating of officers' daily activity logs and other required reports on a real-time basis. Other common pieces of police equipment include flashlights/torches, whistles, police notebooks and "ticket books" or citations. Some police departments have developed advanced computerized data display and communication systems to bring real-time data to officers, one example being the NYPD's Domain Awareness System.
Police vehicles are used for detaining, patrolling and transporting. The average police patrol vehicle is a specially modified, four door sedan (saloon in British English). Police vehicles are usually marked with appropriate logos and are equipped with sirens and flashing light bars to aid in making others aware of police presence.
Unmarked vehicles are used primarily for sting operations or apprehending criminals without alerting them to their presence. Some police forces use unmarked or minimally marked cars for traffic law enforcement, since drivers slow down at the sight of marked police vehicles and unmarked vehicles make it easier for officers to catch speeders and traffic violators. This practice is controversial; New York State, for example, banned it in 1996 on the grounds that it endangered motorists who might be pulled over by people impersonating police officers.[108]
Motorcycles are also commonly used, particularly in locations that a car may not be able to reach, to control potential public order situations involving meetings of motorcyclists, and often in escort duties where motorcycle police officers can quickly clear a path for escorted vehicles. Bicycle patrols are used in some areas because they allow for more open interaction with the public. Bicycles are also commonly used by riot police to create makeshift barricades against protesters.[109] In addition, their quieter operation can facilitate approaching suspects unawares and can help in pursuing those attempting to escape on foot.
Police forces use an array of specialty vehicles such as helicopters, airplanes, watercraft, mobile command posts, vans, trucks, all-terrain vehicles, motorcycles, and armored vehicles.
Police cars may also contain fire extinguishers[110][111] or defibrillators.[112]
The advent of the police car, two-way radio, and telephone in the early 20th century transformed policing into a reactive strategy that focused on responding to calls for service.[113] With this transformation, police command and control became more centralized.
In the United States, August Vollmer introduced other reforms, including education requirements for police officers.[114] O.W. Wilson, a student of Vollmer, helped reduce corruption and introduce professionalism in Wichita, Kansas, and later in the Chicago Police Department.[115] Strategies employed by O.W. Wilson included rotating officers from community to community to reduce their vulnerability to corruption, establishing a non-partisan police board to help govern the police force, a strict merit system for promotions within the department, and an aggressive recruiting drive with higher police salaries to attract professionally qualified officers.[116] During the professionalism era of policing, law enforcement agencies concentrated on dealing with felonies and other serious crime and conducting visible car patrols in between, rather than a broader focus on crime prevention.[117]
The Kansas City Preventive Patrol study in the early 1970s showed flaws in this strategy. It found that aimless car patrols did little to deter crime and often went unnoticed by the public. Patrol officers in cars had insufficient contact and interaction with the community, leading to a social rift between the two.[118] In the 1980s and 1990s, many law enforcement agencies began to adopt community policing strategies, and others adopted problem-oriented policing.
Broken windows policing was another related approach, introduced in the 1980s by James Q. Wilson and George L. Kelling, who suggested that police should pay greater attention to minor "quality of life" offenses and disorderly conduct. The concept behind this method is simple: broken windows, graffiti, and other physical destruction or degradation of property create an environment in which crime and disorder are more likely. The presence of broken windows and graffiti sends a message that authorities do not care and are not trying to correct problems in these areas. Therefore, correcting these small problems prevents more serious criminal activity.[119] The theory was popularised in the early 1990s by police chief William J. Bratton and New York City Mayor Rudy Giuliani.
Building upon these earlier models, intelligence-led policing has also become an important strategy. Intelligence-led policing and problem-oriented policing are complementary strategies, both of which involve systematic use of information.[120] Although it still lacks a universally accepted definition, the crux of intelligence-led policing is an emphasis on the collection and analysis of information to guide police operations, rather than the reverse.[121]
A related development is evidence-based policing. In a similar vein to evidence-based policy, evidence-based policing is the use of controlled experiments to find which methods of policing are more effective. Leading advocates of evidence-based policing include the criminologist Lawrence W. Sherman and philanthropist Jerry Lee. Findings from controlled experiments include the Minneapolis Domestic Violence Experiment,[122] evidence that patrols deter crime if they are concentrated in crime hotspots[123] and that restricting police powers to shoot suspects does not cause an increase in crime or violence against police officers.[124] Use of experiments to assess the usefulness of strategies has been endorsed by many police services and institutions, including the US Police Foundation and the UK College of Policing.
In many nations, criminal procedure law has been developed to regulate officers' discretion, so that they do not arbitrarily or unjustly exercise their powers of arrest, search and seizure, and use of force. In the United States, Miranda v. Arizona led to the widespread use of Miranda warnings or constitutional warnings.
In Miranda the court created safeguards against self-incriminating statements made after an arrest. The court held that "The prosecution may not use statements, whether exculpatory or inculpatory, stemming from questioning initiated by law enforcement officers after a person has been taken into custody or otherwise deprived of his freedom of action in any significant way, unless it demonstrates the use of procedural safeguards effective to secure the Fifth Amendment's privilege against self-incrimination."[125]
Police in the United States are also prohibited from holding criminal suspects for more than a reasonable amount of time (usually 24–48 hours) before arraignment, using torture, abuse or physical threats to extract confessions, using excessive force to effect an arrest, and searching suspects' bodies or their homes without a warrant obtained upon a showing of probable cause. The four exceptions to the constitutional requirement of a search warrant are:
In Terry v. Ohio (1968) the court divided seizure into two parts, the investigatory stop and arrest. The court further held that during an investigatory stop a police officer's search "[is] confined to what [is] minimally necessary to determine whether [a suspect] is armed, and the intrusion, which [is] made for the sole purpose of protecting himself and others nearby, [is] confined to ascertaining the presence of weapons" (U.S. Supreme Court). Before Terry, every police encounter constituted an arrest, giving the police officer the full range of search authority. Search authority during a Terry stop (investigatory stop) is limited to weapons only.[125]
Using deception for confessions is permitted, but not coercion. There are exceptions or exigent circumstances such as an articulated need to disarm a suspect or searching a suspect who has already been arrested (Search Incident to an Arrest). The Posse Comitatus Act severely restricts the use of the military for police activity, giving added importance to police SWAT units.
British police officers are governed by similar rules, such as those introduced to England and Wales under the Police and Criminal Evidence Act 1984 (PACE), but generally have greater powers. They may, for example, legally search any suspect who has been arrested, or their vehicles, home or business premises, without a warrant, and may seize anything they find in a search as evidence.
All police officers in the United Kingdom, whatever their actual rank, are 'constables' in terms of their legal position. This means that a newly appointed constable has the same arrest powers as a Chief Constable or Commissioner. However, certain higher ranks have additional powers to authorize certain aspects of police operations, such as a power to authorize a search of a suspect's house (section 18 PACE in England and Wales) by an officer of the rank of Inspector, or the power to authorize a suspect's detention beyond 24 hours by a Superintendent.
Police services commonly include units for investigating crimes committed by the police themselves. These units are typically called Inspectorate-General, or in the US, "internal affairs". In some countries separate organizations outside the police exist for such purposes, such as the British Independent Office for Police Conduct. However, due to American laws on qualified immunity, it has become increasingly difficult to investigate and charge police misconduct and crimes.[126]
Likewise, some state and local jurisdictions, for example, Springfield, Illinois,[127] have similar outside review organizations. The Police Service of Northern Ireland is investigated by the Police Ombudsman for Northern Ireland, an external agency set up as a result of the Patten report into policing the province. In the Republic of Ireland the Garda Síochána is investigated by the Garda Síochána Ombudsman Commission, an independent commission that replaced the Garda Complaints Board in May 2007.
The Special Investigations Unit of Ontario, Canada, is one of only a few civilian agencies around the world responsible for investigating circumstances involving police and civilians that have resulted in a death, serious injury, or allegations of sexual assault. The agency has made allegations of insufficient cooperation from various police services hindering their investigations.[128]
In Hong Kong, any allegations of corruption within the police will be investigated by the Independent Commission Against Corruption and the Independent Police Complaints Council, two agencies which are independent of the police force.
Due to a long-term decline in public confidence in law enforcement in the United States, body cameras worn by police officers are under consideration.[129]
Police forces also find themselves under criticism for their use of force, particularly deadly force. Specifically, tension increases when a police officer of one ethnic group harms or kills a suspect of another one.[citation needed] In the United States, such events occasionally spark protests and accusations of racism against police and allegations that police departments practice racial profiling.
In the United States since the 1960s, concern over such issues has increasingly weighed upon law enforcement agencies, courts and legislatures at every level of government. Incidents such as the 1965 Watts Riots, the videotaped 1991 beating of Rodney King by Los Angeles Police officers, and the riot following their acquittal have been suggested by some people to be evidence that U.S. police are dangerously lacking in appropriate controls.
The fact that this trend has occurred contemporaneously with the rise of the civil rights movement, the "War on Drugs", and a precipitous rise in violent crime from the 1960s to the 1990s has made questions surrounding the role, administration and scope of police authority increasingly complicated.[citation needed]
Police departments and the local governments that oversee them in some jurisdictions have attempted to mitigate some of these issues through community outreach programs and community policing to make the police more accessible to the concerns of local communities, by working to increase hiring diversity, by updating training of police in their responsibilities to the community and under the law, and by increased oversight within the department or by civilian commissions.
In cases in which such measures have been lacking or absent, civil lawsuits have been brought by the United States Department of Justice against local law enforcement agencies, authorized under the 1994 Violent Crime Control and Law Enforcement Act. This has compelled local departments to make organizational changes, enter into consent decree settlements to adopt such measures, and submit to oversight by the Justice Department.[130]
In May 2020, a global movement to increase scrutiny of police violence and to defund police militarization grew in popularity, starting in Minneapolis, Minnesota, with the killing of George Floyd. Calls for fully defunding and abolishing the police gained larger support as more people criticized systemic racism in policing.[131]
The Supreme Court of the United States has consistently ruled that law enforcement officers have no duty to protect any individual, despite the motto "protect and serve"; their duty is to enforce the law in general. The first such case was decided in 1855,[132] and the most recent in 2005, Castle Rock v. Gonzales.[133]
In contrast, the police are entitled to protect private rights in some jurisdictions. To ensure that the police would not interfere in the regular competencies of the courts of law, some police acts require that the police may only interfere in such cases where protection from courts cannot be obtained in time, and where, without interference of the police, the realization of the private right would be impeded.[134] This would, for example, allow police to establish a restaurant guest's identity and forward it to the innkeeper in a case where the guest cannot pay the bill at nighttime because his wallet had just been stolen from the restaurant table.
In addition, there are federal law enforcement agencies in the United States whose mission includes providing protection for executives such as the president and accompanying family members, visiting foreign dignitaries, and other high-ranking individuals.[135][better source needed] Such agencies include the U.S. Secret Service and the U.S. Park Police.
Police forces are usually organized and funded by some level of government. The level of government responsible for policing varies from place to place, and may be at the national, regional or local level. Some countries have police forces that serve the same territory, with their jurisdiction depending on the type of crime or other circumstances. Other countries, such as Austria, Chile, Israel, New Zealand, the Philippines, South Africa and Sweden, have a single national police force.[136]
In some places with multiple national police forces, one common arrangement is to have a civilian police force and a paramilitary gendarmerie, such as the Police Nationale and National Gendarmerie in France.[82] The French policing system spread to other countries through the Napoleonic Wars[137] and the French colonial empire.[138][139] Another example is the Policía Nacional and Guardia Civil in Spain. In both France and Spain, the civilian force polices urban areas and the paramilitary force polices rural areas. Italy has a similar arrangement with the Polizia di Stato and Carabinieri, though their jurisdictions overlap more. Some countries have separate agencies for uniformed police and detectives, such as the Military Police and Civil Police in Brazil and the Carabineros and Investigations Police in Chile.
Other countries have sub-national police forces, but for the most part their jurisdictions do not overlap. In many countries, especially federations, there may be two or more tiers of police force, each serving different levels of government and enforcing different subsets of the law. In Australia and Germany, the majority of policing is carried out by state (i.e. provincial) police forces, which are supplemented by a federal police force. Though not a federation, the United Kingdom has a similar arrangement, where policing is primarily the responsibility of a regional police force and specialist units exist at the national level. In Canada, the Royal Canadian Mounted Police (RCMP) are the federal police, while municipalities can decide whether to run a local police service or to contract local policing duties to a larger one. Most urban areas have a local police service, while most rural areas contract it to the RCMP, or to the provincial police in Ontario and Quebec.
The United States has a highly decentralized and fragmented system of law enforcement, with over 17,000 state and local law enforcement agencies.[140] These agencies include local police, county law enforcement (often in the form of a sheriff's office, or county police), state police and federal law enforcement agencies. Federal agencies, such as the FBI, only have jurisdiction over federal crimes or those that involve more than one state. Other federal agencies have jurisdiction over a specific type of crime. Examples include the Federal Protective Service, which patrols and protects government buildings; the postal police, which protect postal buildings, vehicles and items; the Park Police, which protect national parks; and Amtrak Police, which patrol Amtrak stations and trains. There are also some government agencies that perform police functions in addition to other duties, such as the Coast Guard.
A police officer, also known as an officer, policeman, or policewoman, is a warranted law enforcement employee of a police force. In most countries, "police officer" is a generic term not specifying a particular rank. In some, the use of the rank "officer" is legally reserved for military personnel.
Police officers are generally charged with the apprehension of suspects and the prevention, detection, and reporting of crime, protection and assistance of the general public, and the maintenance of public order. Police officers may be sworn to an oath, and have the power to arrest people and detain them for a limited time, along with other duties and powers. Some officers are trained in special duties, such as counter-terrorism, surveillance, child protection, VIP protection, civil law enforcement, and investigation techniques into major crime including fraud, rape, murder, and drug trafficking. Although many police officers wear a corresponding uniform, some police officers are plain-clothed in order to pass themselves off as civilians. In most countries police officers are given exemptions from certain laws to perform their duties. For example, an officer may use force if necessary to arrest or detain a person when it would ordinarily be assault. In some countries, officers can also break road rules to perform their duties.[1]
The word "police" comes from the Greek politeia, meaning government, which came to mean its civil administration. The more general term for the function is law enforcement officer or peace officer. A sheriff is typically the top police officer of a county, with that word coming from the person enforcing law over a shire. A person who has been deputized to serve the function of the sheriff is referred to as the deputy.
Police officers are those empowered by government to enforce the laws it creates. In The Federalist collection of articles and essays, James Madison wrote: "If men were angels, no Government would be necessary". These words apply to those who serve government, including police. A common nickname for a police officer is "cop"; derived from the verb sense "to arrest", itself derived from "to grab". Thus, "someone who captures", a "copper", was shortened to just "cop".[2] It may also find its origin in the Latin capere, brought to English via the Old French caper.[3]
Responsibilities of a police officer are varied, and may differ greatly from one political context to another. Typical duties relate to keeping the peace, law enforcement, protection of people and property and the investigation of crimes. Officers are expected to respond to a variety of situations that may arise while they are on duty. Rules and guidelines dictate how an officer should behave within the community, and in many contexts, restrictions are placed on what the uniformed officer wears. In some countries, rules and procedures dictate that a police officer is obliged to intervene in a criminal incident, even if they are off-duty. Police officers in nearly all countries retain their lawful powers while off duty.[4]
In the majority of Western legal systems, the major role of the police is to maintain order, keeping the peace through surveillance of the public, and the subsequent reporting and apprehension of suspected violators of the law. They also function to discourage crimes through high-visibility policing, and most police forces have an investigative capability. Police have the legal authority to arrest and detain, usually granted by magistrates. Police officers also respond to emergency calls, along with routine community policing.
Police are often used as an emergency service and may provide a public safety function at large gatherings, as well as in emergencies, disasters, search and rescue situations, and road traffic collisions. To provide a prompt response in emergencies, the police often coordinate their operations with fire and emergency medical services. In some countries, individuals serve jointly as police officers as well as firefighters (creating the role of fire police). In many countries, there is a common emergency service number that allows the police, firefighters, or medical services to be summoned to an emergency. Some countries, such as the United Kingdom, have outlined command procedures for use in major emergencies or disorder. The Gold Silver Bronze command structure is a system set up to improve communications between ground-based officers and the control room: typically, the Bronze Commander is a senior officer on the ground, coordinating efforts at the center of the emergency; the Silver Commander is positioned in an 'Incident Control Room' erected to improve communications at the scene; and the Gold Commander directs the operation from the Control Room.
Police are also responsible for reprimanding minor offenders by issuing citations which typically may result in the imposition of fines, particularly for violations of traffic law. Traffic enforcement is often and effectively accomplished by police officers on motorcycles—called motor officers, these officers refer to the motorcycles they ride on duty as simply motors. Police are also trained to assist persons in distress, such as motorists whose car has broken down and people experiencing a medical emergency. Police are typically trained in basic first aid such as CPR.
Some park rangers are commissioned as law enforcement officers and carry out a law-enforcement role within national parks and other back-country wilderness and recreational areas, while military police perform law enforcement functions within the military.
In most countries, candidates for the police force must have completed some formal education.[5] Increasing numbers of people are joining the police force who possess tertiary education,[6] and in response to this many police forces have developed a "fast-track" scheme whereby those with university degrees spend two to three years as a Constable before receiving promotion to higher ranks, such as Sergeant or Inspector. (Officers who work within investigative divisions or plainclothes are not necessarily of a higher rank but merely have different duties.)[citation needed] Police officers are also recruited from those with experience in the military or security services. In the United States, state laws may codify statewide qualification standards regarding age, education, criminal record, and training, but in other places requirements are set by local police agencies, and each agency's requirements differ.
Promotion is not automatic and usually requires the candidate to pass some kind of examination, interview board or other selection procedure. Although promotion normally includes an increase in salary, it also brings with it an increase in responsibility and, for most, an increase in administrative paperwork. Not all officers seek promotion, and there is no stigma attached to this, as experienced line patrol officers are highly regarded.
Depending on the agency, but generally after completing two years of service, officers may apply for specialist positions, such as detective, police dog handler, mounted police officer, motorcycle officer, water police officer, or firearms officer (in countries where police are not routinely armed).
In some countries, including Singapore, police ranks are supplemented through conscription, similar to national service in the military. Qualifications may thus be relaxed or enhanced depending on the target mix of conscripts. Conscripts face tougher physical requirements in areas such as eyesight, but minimum academic qualification requirements are less stringent. Some join as volunteers, again via differing qualification requirements.
In some societies, police officers are paid relatively well compared to other occupations; their pay depends on their rank within the police force and how many years they have served.[7] In the United States, an average police officer's salary was between $53,561 and $64,581 in 2020.[8] In the United Kingdom, a police officer's average salary for the year 2015–16 was £30,901.[citation needed]
There are numerous issues affecting the safety and health of police officers, including line of duty deaths and occupational stress. On August 6, 2019, New Jersey Attorney General Gurbir Grewal announced creation of the first U.S. statewide program to support the mental health of police officers. The goal of the program would be to train officers in emotional resiliency and to help destigmatize mental health issues.[9]
Almost universally, police officers are authorized the use of force, up to and including deadly force, when acting in a law enforcement capacity.[10] Although most law enforcement agencies follow some variant of the use of force continuum, where officers are only authorized the level of force required to match situational requirements, specific thresholds and responses vary between jurisdictions.[11] While officers are trained to avoid excessive use of force, and may be held legally accountable for infractions, the variability of law enforcement and its dependence on human judgment have made the subject an area of controversy and research.[12][13]
In the performance of their duties, police officers may act unlawfully, either deliberately or as a result of errors in judgment.[14] Police accountability efforts strive to protect citizens and their rights by ensuring legal and effective law enforcement conduct, while affording individual officers the required autonomy, protection, and discretion. As an example, the use of body-worn cameras has been shown to reduce both instances of misconduct and complaints against officers.[15]
Adolescence (from Latin adolescere, meaning 'to grow up')[1] is a transitional stage of physical and psychological development that generally occurs during the period from puberty to legal adulthood (age of majority).[1][2][3] Adolescence is usually associated with the teenage years,[3][4][5][6] but its physical, psychological or cultural expressions may begin earlier and end later. For example, puberty now typically begins during preadolescence, particularly in females.[4][7][8][9][10] Physical growth (particularly in males) and cognitive development can extend into the early twenties. Thus, age provides only a rough marker of adolescence, and scholars have found it difficult to agree upon a precise definition of adolescence.[7][8][11][12]
A thorough understanding of adolescence in society depends on information from various perspectives, including psychology, biology, history, sociology, education, and anthropology. Within all of these perspectives, adolescence is viewed as a transitional period between childhood and adulthood, whose cultural purpose is the preparation of children for adult roles.[13] It is a period of multiple transitions involving education, training, employment, and unemployment, as well as transitions from one living circumstance to another.[14]
The end of adolescence and the beginning of adulthood varies by country. Furthermore, even within a single nation, state or culture, there can be different ages at which an individual is considered mature enough for society to entrust them with certain privileges and responsibilities. Such privileges and responsibilities include driving a vehicle, having legal sexual relations, serving in the armed forces or on a jury, purchasing and drinking alcohol, purchasing tobacco products, voting, entering into contracts, finishing certain levels of education, marriage, and accountability for upholding the law. Adolescence is usually accompanied by an increased independence allowed by the parents or legal guardians, including less supervision as compared to preadolescence.
In studying adolescent development,[15] adolescence can be defined biologically, as the physical transition marked by the onset of puberty and the termination of physical growth; cognitively, as changes in the ability to think abstractly and multi-dimensionally; or socially, as a period of preparation for adult roles. Major pubertal and biological changes include changes to the sex organs, height, weight, and muscle mass, as well as major changes in brain structure and organization. Cognitive advances encompass both an increase in knowledge and an improved ability to think abstractly and to reason more effectively. The study of adolescent development often involves interdisciplinary collaborations. For example, researchers in neuroscience or bio-behavioral health might focus on pubertal changes in brain structure and its effects on cognition or social relations. Sociologists interested in adolescence might focus on the acquisition of social roles (e.g., worker or romantic partner) and how this varies across cultures or social conditions.[16] Developmental psychologists might focus on changes in relations with parents and peers as a function of school structure and pubertal status.[17] Some scientists have questioned the universality of adolescence as a developmental phase, arguing that traits often considered typical of adolescents are not in fact inherent to the teenage years.
Puberty is a period of several years in which rapid physical growth and psychological changes occur, culminating in sexual maturity. The average age of onset of puberty is at 11 for girls and 12 for boys.[18][19] Every person's individual timetable for puberty is influenced primarily by heredity, although environmental factors, such as diet and exercise, also exert some influences.[20][21] These factors can also contribute to precocious and delayed puberty.[12][21]
Some of the most significant parts of pubertal development involve distinctive physiological changes in individuals' height, weight, body composition, and circulatory and respiratory systems.[22] These changes are largely influenced by hormonal activity. Hormones play an organizational role, priming the body to behave in a certain way once puberty begins,[23] and an active role, referring to changes in hormones during adolescence that trigger behavioral and physical changes.[24]
Puberty occurs through a long process and begins with a surge in hormone production, which in turn causes a number of physical changes. It is the stage of life characterized by the appearance and development of secondary sex characteristics (for example, a deeper voice and larger Adam's apple in boys, and development of breasts and more curved and prominent hips in girls) and a strong shift in hormonal balance towards an adult state. This is triggered by the pituitary gland, which secretes a surge of hormonal agents into the blood stream, initiating a chain reaction. The male and female gonads are thereby activated, which puts them into a state of rapid growth and development; the triggered gonads now commence mass production of hormones. The testes primarily release testosterone, and the ovaries predominantly release estrogen. The production of these hormones increases gradually until sexual maturation is reached. Some boys may develop gynecomastia due to an imbalance of sex hormones, tissue responsiveness or obesity.[25]
Facial hair in males normally appears in a specific order during puberty: The first facial hair to appear tends to grow at the corners of the upper lip, typically between 14 and 17 years of age.[26][27] It then spreads to form a moustache over the entire upper lip. This is followed by the appearance of hair on the upper part of the cheeks, and the area under the lower lip.[26] The hair eventually spreads to the sides and lower border of the chin, and the rest of the lower face to form a full beard.[26] As with most human biological processes, this specific order may vary among some individuals. Facial hair is often present in late adolescence, around ages 17 and 18, but may not appear until significantly later.[27][28] Some men do not develop full facial hair for 10 years after puberty.[27] Facial hair continues to get coarser, darker and thicker for another 2–4 years after puberty.[27]
The major landmark of puberty for males is spermarche, the first ejaculation, which occurs, on average, at age 13.[29] For females, it is menarche, the onset of menstruation, which occurs, on average, between ages 12 and 13.[20][30][31][32] The age of menarche is influenced by heredity, but a girl's diet and lifestyle contribute as well.[20] Regardless of genes, a girl must have a certain proportion of body fat to attain menarche.[20] Consequently, girls who have a high-fat diet and who are not physically active begin menstruating earlier, on average, than girls whose diet contains less fat and whose activities involve fat reducing exercise (e.g. ballet and gymnastics).[20][21] Girls who experience malnutrition or are in societies in which children are expected to perform physical labor also begin menstruating at later ages.[20]
The timing of puberty can have important psychological and social consequences. Early maturing boys are usually taller and stronger than their friends.[33] They have the advantage in capturing the attention of potential partners and in being picked first for sports. Pubescent boys tend to have a good body image, and are more confident, secure, and independent.[34] Late maturing boys can be less confident because of poor body image when comparing themselves to already developed friends and peers. However, early puberty is not always positive for boys; early sexual maturation in boys can be accompanied by increased aggressiveness due to the surge of hormones that affect them.[34] Because they appear older than their peers, pubescent boys may face increased social pressure to conform to adult norms; society may view them as more emotionally advanced, despite the fact that their cognitive and social development may lag behind their appearance.[34] Studies have shown that early maturing boys are more likely to be sexually active and are more likely to participate in risky behaviors.[35]
For girls, early maturation can sometimes lead to increased self-consciousness, a typical aspect in maturing females.[36] Because their bodies develop in advance of their peers', pubescent girls can become more insecure and dependent.[36] Consequently, girls who reach sexual maturation early are more likely than their peers to develop eating disorders (such as anorexia nervosa). Nearly half of all American high school girls diet to lose weight.[36] In addition, girls may have to deal with sexual advances from older boys before they are emotionally and mentally mature.[37] In addition to having earlier sexual experiences and more unwanted pregnancies than late maturing girls, early maturing girls are more exposed to alcohol and drug abuse.[38] Those who have had such experiences tend to not perform as well in school as their "inexperienced" peers.[39]
Girls have usually reached full physical development around ages 15–17,[3][19][40] while boys usually complete puberty around ages 16–17.[19][40][41] Any increase in height beyond the post-pubertal age is uncommon. Girls attain reproductive maturity about four years after the first physical changes of puberty appear.[3] In contrast, boys develop more slowly but continue to grow for about six years after the first visible pubertal changes.[34][41]
The adolescent growth spurt is a rapid increase in the individual's height and weight during puberty resulting from the simultaneous release of growth hormones, thyroid hormones, and androgens.[42] Males experience their growth spurt about two years later, on average, than females. During their peak height velocity (the time of most rapid growth), adolescents grow at a growth rate nearly identical to that of a toddler—about 10.3 cm (4 inches) per year for males and 9 cm (3.5 inches) per year for females.[43] In addition to changes in height, adolescents also experience a significant increase in weight (Marshall, 1978). The weight gained during adolescence constitutes nearly half of one's adult body weight.[43] Teenage and early adult males may continue to gain natural muscle growth even after puberty.[34]
The accelerated growth in different body parts happens at different times, but for all adolescents, it has a fairly regular sequence. The first places to grow are the extremities—the head, hands and feet—followed by the arms and legs, then the torso and shoulders.[44] This non-uniform growth is one reason why an adolescent body may seem out of proportion.
During puberty, bones become harder and more brittle. At the conclusion of puberty, the ends of the long bones close, in a process called epiphyseal fusion. There can be ethnic differences in these skeletal changes. For example, in the United States, bone density increases significantly more among black than white adolescents, which might account for black women's decreased likelihood of developing osteoporosis and their lower rate of bone fractures.[45]
Another set of significant physical changes during puberty happen in bodily distribution of fat and muscle. This process is different for females and males. Before puberty, there are nearly no sex differences in fat and muscle distribution; during puberty, boys grow muscle much faster than girls, although both sexes experience rapid muscle development. In contrast, though both sexes experience an increase in body fat, the increase is much more significant for girls. Frequently, the increase in fat for girls happens in their years just before puberty. The ratio between muscle and fat among post-pubertal boys is around three to one, while for girls it is about five to four. This may help explain sex differences in athletic performance.[46]
Pubertal development also affects circulatory and respiratory systems, as an adolescent's heart and lungs increase in both size and capacity. These changes lead to increased strength and tolerance for exercise. Sex differences are apparent, as males tend to develop "larger hearts and lungs, higher systolic blood pressure, a lower resting heart rate, a greater capacity for carrying oxygen to the blood, a greater power for neutralizing the chemical products of muscular exercise, higher blood hemoglobin and more red blood cells".[47]
Despite some genetic sex differences, environmental factors play a large role in biological changes during adolescence. For example, girls tend to reduce their physical activity in preadolescence[48][49] and may receive inadequate nutrition from diets that often lack important nutrients, such as iron.[50] These environmental influences, in turn, affect female physical development.
Primary sex characteristics are those directly related to the sex organs. In males, the first stages of puberty involve growth of the testes and scrotum, followed by growth of the penis.[51] At the time that the penis develops, the seminal vesicles, the prostate, and the bulbourethral gland also enlarge and develop. The first ejaculation of seminal fluid generally occurs about one year after the beginning of accelerated penis growth, although this is often determined culturally rather than biologically, since for many boys first ejaculation occurs as a result of masturbation.[44] Boys are generally fertile before they have an adult appearance.[42]
In females, changes in the primary sex characteristics involve growth of the uterus, vagina, and other aspects of the reproductive system. Menarche, the beginning of menstruation, is a relatively late development which follows a long series of hormonal changes.[52] Generally, a girl is not fully fertile until several years after menarche, as regular ovulation follows menarche by about two years.[53] Unlike males, therefore, females usually appear physically mature before they are capable of becoming pregnant.
Changes in secondary sex characteristics include every change that is not directly related to sexual reproduction. In males, these changes involve appearance of pubic, facial, and body hair, deepening of the voice, roughening of the skin around the upper arms and thighs, and increased development of the sweat glands. In females, secondary sex changes involve elevation of the breasts, widening of the hips, development of pubic and underarm hair, widening of the areolae, and elevation of the nipples.[54] The changes in secondary sex characteristics that take place during puberty are often referred to in terms of five Tanner stages,[55] named after the British pediatrician who devised the categorization system.
The human brain is not fully developed by the time a person reaches puberty. Between the ages of 10 and 25, the brain undergoes changes that have important implications for behavior (see Cognitive development below). The brain reaches 90% of its adult size by the time a person is six years of age.[56] Thus, the brain does not grow in size much during adolescence. However, the folding in the brain continues to become more complex until the late teens. The biggest changes in the folds during this time occur in the parts of the cortex that process cognitive and emotional information.[56]
Over the course of adolescence, the amount of white matter in the brain increases linearly, while the amount of grey matter in the brain follows an inverted-U pattern.[57] Through a process called synaptic pruning, unnecessary neuronal connections in the brain are eliminated and the amount of grey matter is pared down. However, this does not mean that the brain loses functionality; rather, it becomes more efficient due to increased myelination (insulation of axons) and the reduction of unused pathways.[58]
The first areas of the brain to be pruned are those involving primary functions, such as motor and sensory areas. The areas of the brain involved in more complex processes lose matter later in development. These include the lateral and prefrontal cortices, among other regions.[59] Some of the most developmentally significant changes in the brain occur in the prefrontal cortex, which is involved in decision making and cognitive control, as well as other higher cognitive functions. During adolescence, myelination and synaptic pruning in the prefrontal cortex increase, improving the efficiency of information processing, and neural connections between the prefrontal cortex and other regions of the brain are strengthened.[60] This leads to better evaluation of risks and rewards, as well as improved control over impulses. Specifically, developments in the dorsolateral prefrontal cortex are important for controlling impulses and planning ahead, while development in the ventromedial prefrontal cortex is important for decision making. Changes in the orbitofrontal cortex are important for evaluating rewards and risks.
Three neurotransmitters that play important roles in adolescent brain development are glutamate, dopamine and serotonin. Glutamate is an excitatory neurotransmitter. During the synaptic pruning that occurs during adolescence, most of the neural connections that are pruned contain receptors for glutamate or other excitatory neurotransmitters.[61] Because of this, by early adulthood the synaptic balance in the brain is more inhibitory than excitatory.
Dopamine is associated with pleasure and attuning to the environment during decision-making. During adolescence, dopamine levels in the limbic system increase and input of dopamine to the prefrontal cortex increases.[62] The balance of excitatory to inhibitory neurotransmitters and increased dopamine activity in adolescence may have implications for adolescent risk-taking and vulnerability to boredom (see Cognitive development below).
Serotonin is a neuromodulator involved in regulation of mood and behavior. Development in the limbic system plays an important role in determining rewards and punishments and processing emotional experience and social information. Changes in the levels of the neurotransmitters dopamine and serotonin in the limbic system make adolescents more emotional and more responsive to rewards and stress. The corresponding increase in emotional variability also can increase adolescents' vulnerability. The effect of serotonin is not limited to the limbic system: several serotonin receptors have their gene expression change dramatically during adolescence, particularly in the human frontal and prefrontal cortex.[63]
Adolescence is also a time for rapid cognitive development.[64] Piaget describes adolescence as the stage of life in which the individual's thoughts start taking more of an abstract form and the egocentric thoughts decrease. This allows the individual to think and reason in a wider perspective.[65] A combination of behavioural and fMRI studies have demonstrated development of executive functions, that is, cognitive skills that enable the control and coordination of thoughts and behaviour, which are generally associated with the prefrontal cortex.[66] The thoughts, ideas and concepts developed at this period of life greatly influence one's future life, playing a major role in character and personality formation.[67]
Biological changes in brain structure and connectivity within the brain interact with increased experience, knowledge, and changing social demands to produce rapid cognitive growth (see Changes in the brain above). The age at which particular changes take place varies between individuals, but the changes discussed below begin at puberty or shortly after that and some skills continue to develop as the adolescent ages. The dual systems model proposes a maturational imbalance between development of the socioemotional system and cognitive control systems in the brain that contribute to impulsivity and other behaviors characteristic of adolescence.[68]
There are at least two major approaches to understanding cognitive change during adolescence. One is the constructivist view of cognitive development. Based on the work of Piaget, it takes a qualitative, stage-theory approach, hypothesizing that adolescents' cognitive improvement is relatively sudden and drastic. The second is the information-processing perspective, which derives from the study of artificial intelligence and attempts to explain cognitive development in terms of the growth of specific components of the thinking process.
By the time individuals have reached age 15 or so, their basic thinking abilities are comparable to those of adults. These improvements occur in five areas during adolescence:
Studies since 2005 indicate that the brain is not fully formed until the early twenties.[74]
Adolescents' thinking is less bound to concrete events than that of children: they can contemplate possibilities outside the realm of what currently exists. One manifestation of the adolescent's increased facility with thinking about possibilities is the improvement of skill in deductive reasoning, which leads to the development of hypothetical thinking. This provides the ability to plan ahead, see the future consequences of an action and to provide alternative explanations of events. It also makes adolescents more skilled debaters, as they can reason against a friend's or parent's assumptions. Adolescents also develop a more sophisticated understanding of probability.
The appearance of more systematic, abstract thinking is another notable aspect of cognitive development during adolescence. For example, adolescents find it easier than children to comprehend the sorts of higher-order abstract logic inherent in puns, proverbs, metaphors, and analogies. Their increased facility permits them to appreciate the ways in which language can be used to convey multiple messages, such as satire, metaphor, and sarcasm. (Children younger than age nine often cannot comprehend sarcasm at all.)[75] This also permits the application of advanced reasoning and logical processes to social and ideological matters such as interpersonal relationships, politics, philosophy, religion, morality, friendship, faith, fairness, and honesty.
A third gain in cognitive ability involves thinking about thinking itself, a process referred to as metacognition. It often involves monitoring one's own cognitive activity during the thinking process. Adolescents' improvements in knowledge of their own thinking patterns lead to better self-control and more effective studying. It is also relevant in social cognition, resulting in increased introspection, self-consciousness, and intellectualization (in the sense of thought about one's own thoughts, rather than the Freudian definition as a defense mechanism). Adolescents are much better able than children to understand that people do not have complete control over their mental activity. Being able to introspect may lead to two forms of adolescent egocentrism, which results in two distinct problems in thinking: the imaginary audience and the personal fable. These likely peak at age fifteen, along with self-consciousness in general.[76]
Related to metacognition and abstract thought, perspective-taking involves a more sophisticated theory of mind.[77] Adolescents reach a stage of social perspective-taking in which they can understand how the thoughts or actions of one person can influence those of another person, even if they personally are not involved.[78]
Compared to children, adolescents are more likely to question others' assertions, and less likely to accept facts as absolute truths. Through experience outside the family circle, they learn that rules they were taught as absolute are in fact relativistic. They begin to differentiate between rules instituted out of common sense—not touching a hot stove—and those that are based on culturally relative standards (codes of etiquette, not dating until a certain age), a delineation that younger children do not make. This can lead to a period of questioning authority in all domains.[79]
Wisdom, or the capacity for insight and judgment that is developed through experience,[80] increases between the ages of fourteen and twenty-five, then levels off. Thus, it is during the adolescence–adulthood transition that individuals acquire the type of wisdom that is associated with age. Wisdom is not the same as intelligence: adolescents do not improve substantially on IQ tests since their scores are relative to others in their same age group, and relative standing usually does not change—everyone matures at approximately the same rate in this way.
Because most injuries sustained by adolescents are related to risky behavior (alcohol consumption and drug use, reckless or distracted driving, unprotected sex), a great deal of research has been done on the cognitive and emotional processes underlying adolescent risk-taking. In addressing this question, it is important to distinguish whether adolescents are more likely to engage in risky behaviors (prevalence), whether they make risk-related decisions similarly or differently than adults (cognitive processing perspective), or whether they use the same processes but value different things and thus arrive at different conclusions.
The behavioral decision-making theory proposes that adolescents and adults both weigh the potential rewards and consequences of an action. However, research has shown that adolescents seem to give more weight to rewards, particularly social rewards, than do adults.[81]
Research seems to favor the hypothesis that adolescents and adults think about risk in similar ways, but hold different values and thus come to different conclusions. Some have argued that there may be evolutionary benefits to an increased propensity for risk-taking in adolescence. For example, without a willingness to take risks, teenagers would not have the motivation or confidence necessary to leave their family of origin. In addition, from a population perspective, there is an advantage to having a group of individuals willing to take more risks and try new methods, counterbalancing the more conservative elements more typical of the received knowledge held by older adults.
Risk-taking may also have reproductive advantages: adolescents have a newfound priority in sexual attraction and dating, and risk-taking is required to impress potential mates. Research also indicates that baseline sensation seeking may affect risk-taking behavior throughout the lifespan.[82][83] Given the potential consequences, engaging in sexual behavior is somewhat risky, particularly for adolescents. Having unprotected sex, using poor birth control methods (e.g. withdrawal), having multiple sexual partners, and poor communication are some aspects of sexual behavior that increase individual and/or social risk.
Aspects of adolescents' lives that are correlated with risky sexual behavior include higher rates of parental abuse, and lower rates of parental support and monitoring.[84]
Related to their increased tendency for risk-taking, adolescents show impaired behavioral inhibition, including deficits in extinction learning.[85] This has important implications for engaging in risky behavior such as unsafe sex or illicit drug use, as adolescents are less likely to inhibit actions that may have negative outcomes in the future.[86] This phenomenon also has consequences for behavioral treatments based on the principle of extinction, such as cue exposure therapy for anxiety or drug addiction.[87][88] It has been suggested that impaired inhibition, specifically extinction, may help to explain adolescent propensity to relapse to drug-seeking even following behavioral treatment for addiction.[89]
The formal study of adolescent psychology began with the publication of G. Stanley Hall's "Adolescence" in 1904. Hall, who was the first president of the American Psychological Association, viewed adolescence primarily as a time of internal turmoil and upheaval (Sturm und Drang). This understanding of youth was based on two then-new ways of understanding human behavior: Darwin's evolutionary theory and Freud's psychodynamic theory. He believed that adolescence was a representation of our human ancestors' phylogenetic shift from being primitive to being civilized. Hall's assertions stood relatively uncontested until the 1950s when psychologists such as Erik Erikson and Anna Freud started to formulate their theories about adolescence. Freud believed that the psychological disturbances associated with youth were biologically based and culturally universal while Erikson focused on the dichotomy between identity formation and role fulfillment.[90] Even with their different theories, these three psychologists agreed that adolescence was inherently a time of disturbance and psychological confusion. The less turbulent aspects of adolescence, such as peer relations and cultural influence, were left largely ignored until the 1980s. From the '50s until the '80s, the focus of the field was mainly on describing patterns of behavior as opposed to explaining them.[90]
Jean Macfarlane founded the University of California, Berkeley's Institute of Human Development, formerly called the Institute of Child Welfare, in 1927.[91] The Institute was instrumental in initiating studies of healthy development, in contrast to previous work that had been dominated by theories based on pathological personalities.[91] The studies looked at human development during the Great Depression and World War II, unique historical circumstances under which a generation of children grew up. The Oakland Growth Study, initiated by Harold Jones and Herbert Stolz in 1931, aimed to study the physical, intellectual, and social development of children in the Oakland area. Data collection began in 1932 and continued until 1981, allowing the researchers to gather longitudinal data on the individuals that extended past adolescence into adulthood. Jean Macfarlane launched the Berkeley Guidance Study, which examined the development of children in terms of their socioeconomic and family backgrounds.[92] These studies provided the background for Glen Elder in the 1960s to propose a life course perspective of adolescent development. Elder formulated several descriptive principles of adolescent development. The principle of historical time and place states that an individual's development is shaped by the period and location in which they grow up. The principle of the importance of timing in one's life refers to the different impact that life events have on development based on when in one's life they occur. The idea of linked lives states that one's development is shaped by the interconnected network of relationships of which one is a part, and the principle of human agency asserts that one's life course is constructed via the choices and actions of an individual within the context of their historical period and social network.[93]
In 1984, the Society for Research on Adolescence (SRA) became the first official organization dedicated to the study of adolescent psychology. Some of the issues first addressed by this group include: the nature versus nurture debate as it pertains to adolescence; understanding the interactions between adolescents and their environment; and considering culture, social groups, and historical context when interpreting adolescent behavior.[90]
Evolutionary biologists like Jeremy Griffith have drawn parallels between adolescent psychology and the developmental evolution of modern humans from hominid ancestors as a manifestation of ontogeny recapitulating phylogeny.[94]
Identity development is a stage in the adolescent life cycle.[95] For most, the search for identity begins in the adolescent years. During these years, adolescents are more open to 'trying on' different behaviours and appearances to discover who they are.[96] In an attempt to find their identity and discover who they are, adolescents are likely to cycle through a number of identities to find one that suits them best. Developing and maintaining identity (in adolescent years) is a difficult task due to multiple factors such as family life, environment, and social status.[95] Empirical studies suggest that this process might be more accurately described as identity development, rather than formation, but confirm a normative process of change in both content and structure of one's thoughts about the self.[97] The two main aspects of identity development are self-clarity and self-esteem.[96] Since choices made during adolescent years can influence later life, high levels of self-awareness and self-control during mid-adolescence will lead to better decisions during the transition to adulthood.[98] Researchers have used three general approaches to understanding identity development: self-concept, sense of identity, and self-esteem. The years of adolescence create a more conscientious group of young adults. Adolescents pay close attention and give more time and effort to their appearance as their body goes through changes. Unlike children, teens put forth an effort to look presentable.[4] The environment in which an adolescent grows up also plays an important role in their identity development. Studies done by the American Psychological Association have shown that adolescents with a less privileged upbringing have a more difficult time developing their identity.[99]
Self-concept refers to a person's ability to hold opinions and beliefs that are confidently defined, consistent, and stable.[100] Early in adolescence, cognitive developments result in greater self-awareness, greater awareness of others and their thoughts and judgments, the ability to think about abstract, future possibilities, and the ability to consider multiple possibilities at once. As a result, adolescents experience a significant shift from the simple, concrete, and global self-descriptions typical of young children; as children, they defined themselves by physical traits, whereas adolescents define themselves based on their values, thoughts, and opinions.[101]
Adolescents can conceptualize multiple "possible selves" that they could become[102] and long-term possibilities and consequences of their choices.[103] Exploring these possibilities may result in abrupt changes in self-presentation as the adolescent chooses or rejects qualities and behaviors, trying to guide the actual self toward the ideal self (who the adolescent wishes to be) and away from the feared self (who the adolescent does not want to be). For many, these distinctions are uncomfortable, but they also appear to motivate achievement through behavior consistent with the ideal and distinct from the feared possible selves.[102][104]
Further distinctions in self-concept, called "differentiation," occur as the adolescent recognizes the contextual influences on their own behavior and the perceptions of others, and begins to qualify their traits when asked to describe themselves.[105] Differentiation appears fully developed by mid-adolescence.[106] Peaking in the 7th–9th grades, the personality traits adolescents use to describe themselves refer to specific contexts, and therefore may contradict one another. The recognition of inconsistent content in the self-concept is a common source of distress in these years (see Cognitive dissonance),[107] but this distress may benefit adolescents by encouraging structural development.
Egocentrism in adolescents forms a self-conscious desire to feel important in their peer groups and enjoy social acceptance.[108] Unlike the conflicting aspects of self-concept, identity represents a coherent sense of self that is stable across circumstances and includes past experiences and future goals. Everyone has a self-concept, whereas Erik Erikson argued that not everyone fully achieves identity. Erikson's theory of stages of development includes the identity crisis, in which adolescents must explore different possibilities and integrate different parts of themselves before committing to their beliefs. He described the resolution of this process as a stage of "identity achievement" but also stressed that the identity challenge "is never fully resolved once and for all at one point in time".[109] Adolescents begin by defining themselves based on their crowd membership. "Clothes help teens explore new identities, separate from parents, and bond with peers." Fashion has played a major role in teenagers "finding their selves"; fashion is always evolving, which corresponds with the evolution of change in the personality of teenagers.[110] Adolescents attempt to define their identity by consciously styling themselves in different manners to find what best suits them. Trial and error in matching both their perceived image and the image others respond to and see allows the adolescent to grasp an understanding of who they are.[111]
Just as fashion evolves to influence adolescents, so does the media. "Modern life takes place amidst a never-ending barrage of flesh on screens, pages, and billboards."[112] This barrage registers, consciously or subconsciously, in the mind, causing issues with self-image, a factor that contributes to an adolescent's sense of identity. Researcher James Marcia developed the current method for testing an individual's progress along these stages.[113][114] His questions are divided into three categories: occupation, ideology, and interpersonal relationships. Answers are scored based on the extent to which the individual has explored and the degree to which they have made commitments. The result is classification of the individual into a) identity diffusion, in which all children begin; b) identity foreclosure, in which commitments are made without the exploration of alternatives; c) moratorium, the process of exploration; or d) identity achievement, in which moratorium has occurred and resulted in commitments.[115]
Subsequent research reveals that self-examination begins early in adolescence, but identity achievement rarely occurs before age 18.[116] The freshman year of college influences identity development significantly but may actually prolong psychosocial moratorium by encouraging reexamination of previous commitments and further exploration of alternate possibilities without encouraging resolution.[117] For the most part, evidence has supported Erikson's stages: each correlates with the personality traits he originally predicted.[115] Studies also confirm the impermanence of the stages; there is no final endpoint in identity development.[118]
An adolescent's environment plays a huge role in their identity development.[99] While most adolescent studies are conducted on white, middle-class children, studies show that the more privileged an upbringing people have, the more successfully they develop their identity.[99] The formation of an adolescent's identity is a crucial period in their life. Demographic patterns suggest that the transition to adulthood now occurs over a longer span of years than was the case during the middle of the 20th century. Accordingly, youth, a period that spans late adolescence and early adulthood, has become a more prominent stage of the life course, and various factors have therefore become important during this development.[119] Many factors contribute to the developing social identity of an adolescent, from commitment, to coping devices,[120] to social media. All of these factors are affected by the environment an adolescent grows up in. A child from a more privileged upbringing is exposed to more opportunities and better situations in general, whereas an adolescent from an inner city or a crime-driven neighborhood is more likely to be exposed to an environment that can be detrimental to their development. Adolescence is a sensitive period in the development process, and exposure to the wrong things at that time can have a major effect on future decisions. Children who grow up in safe suburban communities are not only less exposed to harmful environments but are also more likely to participate in activities that can benefit their identity and contribute to more successful identity development.[99]
Sexual orientation has been defined as "an erotic inclination toward people of one or more genders, most often described as sexual or erotic attractions".[121] In recent years, psychologists have sought to understand how sexual orientation develops during adolescence. Some theorists believe that there are many different possible developmental paths one could take, and that the specific path an individual follows may be determined by their sex, orientation, and when they reached the onset of puberty.[121]
In 1989, Troiden proposed a four-stage model for the development of homosexual sexual identity.[122] The first stage, known as sensitization, usually starts in childhood and is marked by the child's becoming aware of same-sex attractions. The second stage, identity confusion, tends to occur a few years later. In this stage, the youth is overwhelmed by feelings of inner turmoil regarding their sexual orientation, and begins to engage in sexual experiences with same-sex partners. In the third stage, identity assumption, which usually takes place a few years after the adolescent has left home, adolescents begin to come out to their family and close friends and assume a self-definition as gay, lesbian, or bisexual.[123] In the final stage, known as commitment, the young adult adopts their sexual identity as a lifestyle. This model thus estimates that the process of coming out begins in childhood and continues through the early to mid-20s. The model has been contested, and alternate ideas have been explored in recent years.
In terms of sexual identity, adolescence is when most gay, lesbian, and transgender adolescents begin to recognize and make sense of their feelings. Many adolescents may choose to come out during this period of their life once an identity has been formed; many others may go through a period of questioning or denial, which can include experimentation with both homosexual and heterosexual experiences.[124] A study of 194 lesbian, gay, and bisexual youths under the age of 21 found that awareness of one's sexual orientation occurred, on average, around age 10, but the process of coming out to peers and adults occurred around age 16 and 17, respectively.[125] Coming to terms with and creating a positive LGBT identity can be difficult for some youth for a variety of reasons. Peer pressure is a large factor when youth who are questioning their sexuality or gender identity are surrounded by heteronormative peers, and it can cause great distress due to a feeling of being different from everyone else. While coming out can also foster better psychological adjustment, the risks associated with it are real. Indeed, coming out in the midst of a heteronormative peer environment often comes with the risk of ostracism, hurtful jokes, and even violence.[124] Because of this, the suicide rate among LGBT adolescents is statistically up to four times higher than that of their heterosexual peers, due to bullying and rejection from peers or family members.[126]
The final major aspect of identity formation is self-esteem, defined as one's thoughts and feelings about one's self-concept and identity.[127] Most theories on self-esteem state that there is a grand desire, across all genders and ages, to maintain, protect, and enhance one's self-esteem.[100] Contrary to popular belief, there is no empirical evidence for a significant drop in self-esteem over the course of adolescence.[128] "Barometric self-esteem" fluctuates rapidly and can cause severe distress and anxiety, but baseline self-esteem remains highly stable across adolescence.[129] The validity of global self-esteem scales has been questioned, and many suggest that more specific scales might reveal more about the adolescent experience.[130]
Girls are most likely to enjoy high self-esteem when engaged in supportive relationships with friends; the most important function of friendship to them is having someone who can provide social and moral support. When they fail to win friends' approval or cannot find someone with whom to share common activities and common interests, girls suffer from low self-esteem. In contrast, boys are more concerned with establishing and asserting their independence and defining their relation to authority.[131] As such, they are more likely to derive high self-esteem from their ability to successfully influence their friends; on the other hand, a lack of romantic competence, for example failure to win or maintain the affection of the opposite or same sex (depending on sexual orientation), is the major contributor to low self-esteem in adolescent boys. Because both men and women tend to have low self-esteem after ending a romantic relationship, they are prone to other symptoms caused by this state. Depression and hopelessness are only two of the various symptoms; women are twice as likely to experience depression, and men are three to four times more likely to commit suicide (Mearns, 1991; Ustun & Sartorius, 1995).[132]
The relationships adolescents have with their peers, family, and members of their social sphere play a vital role in their social development. As an adolescent's social sphere develops rapidly and they distinguish the differences between friends and acquaintances, they often become heavily emotionally invested in friends.[133] This is not harmful in itself; however, if these friends expose an individual to potentially harmful situations, it becomes an aspect of peer pressure. Adolescence is a critical period in social development because adolescents can be easily influenced by the people they develop close relationships with. This is the first time individuals can truly make their own decisions, which also makes this a sensitive period. Relationships are vital in the social development of an adolescent due to the extreme influence peers can have over an individual. These relationships become significant because they begin to help the adolescent understand the concept of personalities, how they form, and why a person has that specific type of personality. "The use of psychological comparisons could serve both as an index of the growth of an implicit personality theory and as a component process accounting for its creation. In other words, by comparing one person's personality characteristics to another's, we would be setting up the framework for creating a general theory of personality (and, ... such a theory would serve as a useful framework for coming to understand specific persons)."[134] This can be likened to the use of social comparison in developing one's identity and self-concept, which includes one's personality, and underscores the importance of communication, and thus relationships, in one's development. In social comparison we use reference groups, with respect to both psychological and identity development.[135] These reference groups are the peers of adolescents. This means that whom the teen chooses or accepts as friends and communicates with on a frequent basis often makes up their reference groups and can therefore have a huge impact on who they become. Research shows that relationships have the largest effect on the social development of an individual.
Adolescence marks a rapid change in one's role within a family. Young children tend to assert themselves forcefully but are unable to demonstrate much influence over family decisions until early adolescence,[136] when they are increasingly viewed by parents as equals. The adolescent faces the task of increasing independence while preserving a caring relationship with his or her parents.[111] When children go through puberty, there is often a significant increase in parent–child conflict and a less cohesive familial bond. Arguments often concern minor issues of control, such as curfew, acceptable clothing, and the adolescent's right to privacy,[137][138] which adolescents may have previously viewed as issues over which their parents had complete authority.[139] Parent–adolescent disagreement also increases as friends demonstrate a greater impact on one another, new influences on the adolescent that may be in opposition to parents' values. Social media has also played an increasing role in adolescent–parent disagreements.[140] Whereas parents in the past did not have to worry about the threats of social media, it has become a risky place for children. While adolescents strive for their freedoms, what their child is doing on social media sites is a challenging unknown for parents, given the increasing number of predators on such sites. Many parents have very little knowledge of social networking sites in the first place, and this further increases their mistrust. An important challenge for the parent–adolescent relationship is to understand how to enhance the opportunities of online communication while managing its risks.[100] Although conflicts between children and parents increase during adolescence, these are usually over relatively minor issues; regarding important life issues, most adolescents still share the same attitudes and values as their parents.[141]
During childhood, siblings are a source of conflict and frustration as well as a support system.[142] Adolescence may affect this relationship differently, depending on sibling gender. In same-sex sibling pairs, intimacy increases during early adolescence, then remains stable. Mixed-sex sibling pairs act differently: siblings drift apart during the early adolescent years but experience an increase in intimacy starting in middle adolescence.[143] Sibling interactions are children's first relational experiences, the ones that shape their social and self-understanding for life.[144] Sustaining positive sibling relations can assist adolescents in a number of ways. Siblings are able to act as peers and may increase one another's sociability and feelings of self-worth. Older siblings can give guidance to younger siblings, although the impact of this can be either positive or negative depending on the activity of the older sibling.
A potentially important influence on adolescence is a change in family dynamics, specifically divorce. With the divorce rate up to about 50%,[145] divorce is common and adds to the already great amount of change during adolescence. Custody disputes soon after a divorce often reflect a playing out of control battles and ambivalence between parents. Divorce usually results in less contact between the adolescent and their noncustodial parent.[146] In extreme cases of instability and abuse in homes, divorce can have a positive effect on families due to less conflict in the home. However, most research suggests a negative effect on adolescence as well as on later development. A recent study found that, compared with peers who grow up in stable post-divorce families, children of divorce who experience additional family transitions during late adolescence make less progress in their math and social studies performance over time.[147] Another recent study put forth a new theory entitled the adolescent epistemological trauma theory,[148] which posited that traumatic life events such as parental divorce during the formative period of late adolescence portend lifelong effects on adult conflict behavior that can be mitigated by effective behavioral assessment and training.[148] A parental divorce during childhood or adolescence continues to have a negative effect when a person is in his or her twenties and early thirties. These negative effects extend to romantic relationships and conflict style: as adults, children of divorce are more likely to use the styles of avoidance and competing in conflict management.[149]
Despite changing family roles during adolescence, the home environment and parents are still important for the behaviors and choices of adolescents.[150] Adolescents who have a good relationship with their parents are less likely to engage in various risk behaviors, such as smoking, drinking, fighting, and/or unprotected sexual intercourse.[150]
In addition, parents influence the education of adolescents. A study conducted by Adalbjarnardottir and Blondal (2009) showed that adolescents who at age 14 identify their parents as authoritative figures are more likely to complete secondary education by the age of 22, as support and encouragement from an authoritative parent motivates the adolescent to complete schooling to avoid disappointing that parent.[151]
Peer groups are essential to social and general development. Communication with peers increases significantly during adolescence, and peer relationships become more intense than in other stages[152] and more influential to the teen, affecting both the decisions and choices being made.[153] High-quality friendships may enhance children's development regardless of the characteristics of those friends. As children begin to bond with various people and create friendships, this later helps them as adolescents and sets up the framework for adolescent peer groups.[154]
Peer groups are especially important during adolescence, a period of development characterized by a dramatic increase in time spent with peers[155] and a decrease in adult supervision.[156] Adolescents also associate with friends of the opposite sex much more than in childhood[157] and tend to identify with larger groups of peers based on shared characteristics.[158] It is also common for adolescents to use friends as coping devices in different situations.[159] A three-factor structure of dealing with friends, comprising avoidance, mastery, and nonchalance, has shown that adolescents use friends as coping devices for social stresses.
Communication within peer groups allows adolescents to explore their feelings and identity as well as develop and evaluate their social skills. Peer groups offer members the opportunity to develop social skills such as empathy, sharing, and leadership. Adolescents choose peer groups based on characteristics similarly found in themselves.[111] By utilizing these relationships, adolescents become more accepting of who they are becoming. Group norms and values are incorporated into an adolescent's own self-concept.[153] Through developing new communication skills and reflecting upon those of their peers, as well as self-opinions and values, an adolescent can share and express emotions and other concerns without fear of rejection or judgment. Peer groups can have positive influences on an individual, such as on academic motivation and performance. However, while peers may facilitate social development for one another, they may also hinder it. Peers can have negative influences, such as encouraging experimentation with drugs, drinking, vandalism, and stealing through peer pressure.[160] Susceptibility to peer pressure increases during early adolescence, peaks around age 14, and declines thereafter.[161] Further evidence of peers hindering social development has been found in Spanish teenagers, where emotional (rather than solution-based) reactions to problems and emotional instability have been linked with physical aggression against peers.[162] Both physical and relational aggression are linked to a vast number of enduring psychological difficulties, especially depression, as is social rejection.[163] Because of this, bullied adolescents often develop problems that lead to further victimization.[164] Bullied adolescents are more likely to both continue to be bullied and to bully others in the future.[165] However, this relationship is less stable in cases of cyberbullying, a relatively new issue among adolescents.
Adolescents tend to associate with "cliques" on a small scale and "crowds" on a larger scale. During early adolescence, adolescents often associate in cliques, exclusive, single-sex groups of peers with whom they are particularly close. Despite the common notion that cliques are an inherently negative influence, they may help adolescents become socially acclimated and form a stronger sense of identity. Within a clique of highly athletic male peers, for example, the clique may create a stronger sense of fidelity and competition. Cliques have also become somewhat of a "collective parent", i.e. telling the adolescents what to do and not to do.[166] Towards late adolescence, cliques often merge into mixed-sex groups as teenagers begin romantically engaging with one another.[167] These small friend groups then break down further as socialization becomes more couple-oriented. On a larger scale, adolescents often associate with crowds, groups of individuals who share a common interest or activity. Often, crowd identities may be the basis for stereotyping young people, such as jocks or nerds. In large, multi-ethnic high schools, there are often ethnically determined crowds.[168] Adolescents use online technology to experiment with emerging identities and to broaden their peer groups, such as by increasing the number of friends acquired on Facebook and other social media sites.[153] Some adolescents use these newer channels to enhance relationships with peers; however, there can be negative uses as well, such as cyberbullying, as mentioned previously, and negative impacts on the family.[169]
Romantic relationships tend to increase in prevalence throughout adolescence. By age 15, 53% of adolescents have had a romantic relationship that lasted at least one month over the course of the previous 18 months.[170] In a 2008 study conducted by YouGov for Channel 4, 20% of 14–17-year-olds surveyed in the United Kingdom revealed that they had their first sexual experience at 13 or under.[171] A 2002 American study found that those aged 15–44 reported an average age of first sexual intercourse of 17.0 for males and 17.3 for females.[172] The typical duration of relationships increases throughout the teenage years as well. This constant increase in the likelihood of a long-term relationship can be explained by sexual maturation and the development of cognitive skills necessary to maintain a romantic bond (e.g. caregiving, appropriate attachment), although these skills are not strongly developed until late adolescence.[173] Long-term relationships allow adolescents to gain the skills necessary for high-quality relationships later in life[174] and develop feelings of self-worth. Overall, positive romantic relationships among adolescents can result in long-term benefits. High-quality romantic relationships are associated with higher commitment in early adulthood[175] and are positively associated with self-esteem, self-confidence, and social competence.[176][177] For example, an adolescent with positive self-confidence is likely to consider themselves a more successful partner, whereas negative experiences may lead to low confidence as a romantic partner.[178] Adolescents often date within their demographic with regard to race, ethnicity, popularity, and physical attractiveness.[179] However, there are traits in which certain individuals, particularly adolescent girls, seek diversity. While most adolescents date people approximately their own age, boys typically date partners the same age or younger; girls typically date partners the same age or older.[170]
Some researchers are now focusing on learning how adolescents view their own relationships and sexuality; they want to move away from a research point of view that focuses on the problems associated with adolescent sexuality. College professor Lucia O'Sullivan and her colleagues found that there were no significant gender differences in the relationship events adolescent boys and girls from grades 7–12 reported.[180] Most teens said they had kissed their partners, held hands with them, thought of themselves as being a couple, and told people they were in a relationship. This means that private thoughts about the relationship as well as public recognition of the relationship were both important to the adolescents in the sample. Sexual events (such as sexual touching or sexual intercourse) were less common than romantic events (holding hands) and social events (being with one's partner in a group setting). The researchers state that these results are important because they focus on the more positive aspects of adolescents and their social and romantic interactions rather than on sexual behavior and its consequences.[180]
Adolescence marks a time of sexual maturation, which manifests in social interactions as well. While adolescents may engage in casual sexual encounters (often referred to as hookups), most sexual experience during this period of development takes place within romantic relationships.[181] Adolescents can use technologies and social media to seek out romantic relationships, as they feel these are safe places to try out dating and identity exploration. From these social media encounters, a further relationship may begin.[153] Kissing, hand holding, and hugging signify satisfaction and commitment. Among young adolescents, "heavy" sexual activity, marked by genital stimulation, is often associated with violence, depression, and poor relationship quality.[182][183] This effect does not hold true for sexual activity in late adolescence that takes place within a romantic relationship.[184] Some research suggests that there are genetic causes of early sexual activity that are also risk factors for delinquency, indicating that there is a group at risk for both early sexual activity and emotional distress. For older adolescents, though, sexual activity in the context of romantic relationships was actually correlated with lower levels of deviant behavior after controlling for genetic risks, as opposed to sex outside of a relationship (hook-ups).[185]
Dating violence is fairly prevalent within adolescent relationships. When surveyed, 10–45% of adolescents reported having experienced physical violence in the context of a relationship, while a quarter to a third of adolescents reported having experienced psychological aggression. This reported aggression includes hitting, throwing things, or slapping, although most of this physical aggression does not result in a medical visit. Physical aggression in relationships tends to decline from high school through college and young adulthood. In heterosexual couples, there is no significant difference between the rates of male and female aggressors, unlike in adult relationships.[186][187][188]
Adolescent girls with male partners who are older than them are at higher risk for adverse sexual health outcomes than their peers. Research suggests that the larger the partner age difference, the less relationship power the girls experience. Behavioral interventions such as developing relationship skills in identifying, preventing, and coping with controlling behaviors may be beneficial. For condom use promotion, it is important to identify decision-making patterns within relationships and increase the power of the adolescent female in the relationship.[189] Female adolescents from minority populations are at even higher risk for intimate partner violence (IPV). Recent research findings suggest that a substantial portion of young urban females are at high risk for being victims of multiple forms of IPV. Practitioners diagnosing depression among urban minority teens should assess for both physical and non-physical forms of IPV, and early detection can help to identify youths in need of intervention and care.[190][191] Like adult victims, adolescent victims do not readily disclose abuse and may seek out medical care for problems not directly related to incidents of IPV. Therefore, screening should be a routine part of medical treatment for adolescents regardless of chief complaint. Many adults discount instances of IPV in adolescents, or believe they do not occur, because relationships at young ages are viewed as “puppy love”; however, it is crucial that adults take IPV in adolescents seriously, even though policy often falls behind.[192]
In contemporary society, adolescents also face some risks as their sexuality begins to transform. While some of these, such as emotional distress (fear of abuse or exploitation) and sexually transmitted infections/diseases (STIs/STDs), including HIV/AIDS, are not necessarily inherent to adolescence, others such as teenage pregnancy (through non-use or failure of contraceptives) are seen as social problems in most western societies. One in four sexually active teenagers will contract an STI.[193] Adolescents in the United States often choose "anything but intercourse" for sexual activity because they mistakenly believe it reduces the risk of STIs. Across the country, clinicians report rising diagnoses of herpes and human papillomavirus (HPV), which can cause genital warts and is now thought to affect 15 percent of the teen population. Girls 15 to 19 have higher rates of gonorrhea than any other age group. One-quarter of all new HIV cases occur in those under the age of 21.[193] Multrine also states in her article that, according to a March survey by the Kaiser Family Foundation, eighty-one percent of parents want schools to discuss the use of condoms and contraception with their children. They also believe students should be able to be tested for STIs. Furthermore, teachers want to address such topics with their students. But, although 9 in 10 sex education instructors across the country believe that students should be taught about contraceptives in school, over one quarter report receiving explicit instructions from school boards and administrators not to do so. According to anthropologist Margaret Mead, the turmoil found in adolescence in Western society has a cultural rather than a physical cause; she reported that societies where young women engaged in free sexual activity had no such adolescent turmoil.
There are certain characteristics of adolescent development that are more rooted in culture than in human biology or cognitive structures. Culture has been defined as the "symbolic and behavioral inheritance received from the past that provides a community framework for what is valued".[194] Culture is learned and socially shared, and it affects all aspects of an individual's life.[195] Social responsibilities, sexual expression, and belief system development, for instance, are all things that are likely to vary by culture. Furthermore, distinguishing characteristics of youth, including dress, music and other uses of media, employment, art, food and beverage choices, recreation, and language, all constitute a youth culture.[195] For these reasons, culture is a prevalent and powerful presence in the lives of adolescents, and therefore we cannot fully understand today's adolescents without studying and understanding their culture.[195] However, "culture" should not be seen as synonymous with nation or ethnicity. Many cultures are present within any given country and racial or socioeconomic group. Furthermore, to avoid ethnocentrism, researchers must be careful not to define the culture's role in adolescence in terms of their own cultural beliefs.[196]
In Britain, teenagers first came to public attention during the Second World War, when there were fears of juvenile delinquency.[197] By the 1950s, the media presented teenagers in terms of generational rebellion. The exaggerated moral panic among politicians and the older generation was typically belied by the growth in intergenerational cooperation between parents and children. Many working-class parents, enjoying newfound economic security, eagerly took the opportunity to encourage their teens to enjoy more adventurous lives.[198] Schools were falsely portrayed as dangerous blackboard jungles under the control of rowdy kids.[199] The media distortions of the teens as too affluent, and as promiscuous, delinquent, counter-cultural rebels do not reflect the actual experiences of ordinary young adults, particularly young women.[200]
The degree to which adolescents are perceived as autonomous beings varies widely by culture, as do the behaviors that represent this emerging autonomy. Psychologists have identified three main types of autonomy: emotional independence, behavioral autonomy, and cognitive autonomy.[201] Emotional autonomy is defined in terms of an adolescent's relationships with others, and often includes the development of more mature emotional connections with adults and peers.[201] Behavioral autonomy encompasses an adolescent's developing ability to regulate his or her own behavior, to act on personal decisions, and to self-govern. Cultural differences are especially visible in this category because it concerns issues of dating, social time with peers, and time-management decisions.[201] Cognitive autonomy describes the capacity for an adolescent to partake in processes of independent reasoning and decision-making without excessive reliance on social validation.[201] Converging influences from adolescent cognitive development, expanding social relationships, an increasingly adultlike appearance, and the acceptance of more rights and responsibilities enhance feelings of autonomy for adolescents.[201] Proper development of autonomy has been tied to good mental health, high self-esteem, self-motivated tendencies, positive self-concepts, and self-initiating and regulating behaviors.[201] Furthermore, it has been found that adolescents' mental health is best when their feelings about autonomy match closely with those of their parents.[202]
A questionnaire called the teen timetable has been used to measure the age at which individuals believe adolescents should be able to engage in behaviors associated with autonomy.[203] This questionnaire has been used to gauge differences in cultural perceptions of adolescent autonomy, finding, for instance, that White parents and adolescents tend to expect autonomy earlier than those of Asian descent.[203] It is, therefore, clear that cultural differences exist in perceptions of adolescent autonomy, and such differences have implications for the lifestyles and development of adolescents. In sub-Saharan African youth, the notions of individuality and freedom may not be useful in understanding adolescent development. Rather, African notions of childhood and adolescent development are relational and interdependent.[204]
The lifestyle of an adolescent in a given culture is profoundly shaped by the roles and responsibilities he or she is expected to assume. The extent to which an adolescent is expected to share family responsibilities is one large determining factor in normative adolescent behavior. For instance, adolescents in certain cultures are expected to contribute significantly to household chores and responsibilities.[205] Household chores are frequently divided into self-care tasks and family-care tasks. However, specific household responsibilities for adolescents may vary by culture, family type, and adolescent age.[206] Some research has shown that adolescent participation in family work and routines has a positive influence on the development of an adolescent's feelings of self-worth, care, and concern for others.[205]
In addition to the sharing of household chores, certain cultures expect adolescents to share in their family's financial responsibilities. According to family economic and financial education specialists, adolescents develop sound money management skills through the practices of saving and spending money, as well as through planning ahead for future economic goals.[207] Differences between families in the distribution of financial responsibilities or provision of allowance may reflect various social background circumstances and intrafamilial processes, which are further influenced by cultural norms and values, as well as by the business sector and market economy of a given society.[208] For instance, in many developing countries it is common for children to attend fewer years of formal schooling so that, when they reach adolescence, they can begin working.[209]
While adolescence is a time frequently marked by participation in the workforce, the number of adolescents in the workforce is much lower now than in years past as a result of increased accessibility and perceived importance of formal higher education.[210] For example, half of all 16-year-olds in China were employed in 1980, whereas less than one fourth of this same cohort were employed in 1990.[210]
Furthermore, the amount of time adolescents spend on work and leisure activities varies greatly by culture as a result of cultural norms and expectations, as well as various socioeconomic factors. American teenagers spend less time in school or working and more time on leisure activities—which include playing sports, socializing, and caring for their appearance—than do adolescents in many other countries.[211] These differences may be influenced by cultural values of education and the amount of responsibility adolescents are expected to assume in their family or community.
Time management, financial roles, and social responsibilities of adolescents are therefore closely connected with the education sector and processes of career development for adolescents, as well as to cultural norms and social expectations. In many ways, adolescents' experiences with their assumed social roles and responsibilities determine the length and quality of their initial pathway into adult roles.[212]
Adolescence is frequently characterized by a transformation of an adolescent's understanding of the world, the rational direction towards a life course, and the active seeking of new ideas rather than the unquestioning acceptance of adult authority.[213] An adolescent begins to develop a unique belief system through his or her interaction with social, familial, and cultural environments.[214] While organized religion is not necessarily a part of every adolescent's life experience, youth are still held responsible for forming a set of beliefs about themselves, the world around them, and whatever higher powers they may or may not believe in.[213] This process is often accompanied or aided by cultural traditions that intend to provide a meaningful transition to adulthood through a ceremony, ritual, confirmation, or rite of passage.[215]
Many cultures define the transition into adultlike sexuality by specific biological or social milestones in an adolescent's life. For example, menarche (the first menstrual period of a female), or semenarche (the first ejaculation of a male) are frequent sexual defining points for many cultures. In addition to biological factors, an adolescent's sexual socialization is highly dependent upon whether their culture takes a restrictive or permissive attitude toward teen or premarital sexual activity. In the United States specifically, adolescents are said to have "raging hormones" that drive their sexual desires. Teen sex is then dramatized and seen as "a site of danger and risk; that such danger and risk is a source of profound worry among adults".[216] There is little to no normalization of teenagers having sex in the U.S., which causes conflict in how adolescents are taught about sex education. There is a constant debate about whether abstinence-only or comprehensive sex education should be taught in schools, and this debate turns on whether the country in which it is taught is permissive or restrictive. Restrictive cultures overtly discourage sexual activity in unmarried adolescents or until an adolescent undergoes a formal rite of passage. These cultures may attempt to restrict sexual activity by separating males and females throughout their development, or through public shaming and physical punishment when sexual activity does occur.[167][217] In less restrictive cultures, there is more tolerance for displays of adolescent sexuality, or of the interaction between males and females in public and private spaces. Less restrictive cultures may tolerate some aspects of adolescent sexuality, while objecting to other aspects. For instance, some cultures find teenage sexual activity acceptable but teenage pregnancy highly undesirable. Other cultures do not object to teenage sexual activity or teenage pregnancy, as long as they occur after marriage.[218] In permissive societies, overt sexual behavior among unmarried teens is perceived as acceptable, and is sometimes even encouraged.[218] Regardless of whether a culture is restrictive or permissive, there are likely to be discrepancies in how females versus males are expected to express their sexuality. Cultures vary in how overt this double standard is—in some it is legally inscribed, while in others it is communicated through social convention.[219] Lesbian, gay, bisexual and transgender youth face considerable discrimination and bullying from their peers, and may find telling others that they are gay to be a traumatic experience.[220] The range of sexual attitudes that a culture embraces could thus be seen to affect the beliefs, lifestyles, and societal perceptions of its adolescents.
Adolescence is a period frequently marked by increased rights and privileges for individuals. While cultural variation exists for legal rights and their corresponding ages, considerable consistency is found across cultures. Furthermore, since the advent of the Convention on the Rights of the Child in 1989 (children here defined as under 18), almost every country in the world (except the U.S. and South Sudan) has legally committed to advancing an anti-discriminatory stance towards young people of all ages. This includes protecting children against unchecked child labor, enrollment in the military, prostitution, and pornography.
In many societies, those who reach a certain age (often 18, though this varies) are considered to have reached the age of majority and are legally regarded as adults who are responsible for their actions. People below this age are considered minors or children. A person below the age of majority may gain adult rights through legal emancipation.
The legal working age in Western countries is usually 14 to 16, depending on the number of hours and type of employment under consideration. Many countries also specify a minimum school leaving age, at which a person is legally allowed to leave compulsory education. This age varies greatly cross-culturally, spanning from 10 to 18, which further reflects the diverse ways formal education is viewed in cultures around the world.
In most democratic countries, a citizen is eligible to vote at age 18. In a minority of countries, the voting age is as low as 16 (for example, Brazil), and at one time was as high as 25 in Uzbekistan.
The age of consent to sexual activity varies widely between jurisdictions, ranging from 12 to 20 years, as does the age at which people are allowed to marry.[221] Other legal ages that vary by culture include those for enlisting in the military, gambling, and purchasing alcohol, cigarettes, or items with parental advisory labels.
The legal coming of age often does not correspond with the sudden realization of autonomy; many adolescents who have legally reached adult age are still dependent on their guardians or peers for emotional and financial support. Nonetheless, new legal privileges converge with shifting social expectations to usher in a phase of heightened independence or social responsibility for most legal adolescents.
Following a steady decline beginning in the late 1990s up through the mid-2000s and a moderate increase in the early 2010s, illicit drug use among adolescents has roughly plateaued in the U.S. Aside from alcohol, marijuana is the most commonly used drug during the adolescent years. Data collected by the National Institute on Drug Abuse show that between 2015 and 2018, past-year marijuana use among 8th graders declined from 11.8% to 10.5%; among 10th graders, use rose from 25.4% to 27.5%; and among 12th graders, use rose slightly from 34.9% to 35.9%.[222] Additionally, while the early 2010s saw a surge in the popularity of MDMA, usage has since stabilized, with 2.2% of 12th graders using MDMA in the past year in the U.S.[222] The earlier surge in ecstasy use most likely tied in, at least to some degree, with the rising popularity of rave culture.
One significant contribution to the increase in teenage substance abuse is an increase in the availability of prescription medication. With an increase in the diagnosis of behavioral and attentional disorders for students, taking pharmaceutical drugs such as Vicodin and Adderall for pleasure has become a prevalent activity among adolescents: 9.9% of high school seniors report having abused prescription drugs within the past year.[222]
In the U.S., teenage alcohol use rose in the late 2000s and is currently stable at a moderate level. Out of a polled body of U.S. students age 12–18, 8.2% of 8th graders reported having consumed alcohol on at least one occasion within the previous month; for 10th graders, the number was 18.6%, and for 12th graders, 30.2%.[223] More drastically, cigarette smoking has become a far less prevalent activity among American middle- and high-school students; in fact, a greater number of teens now smoke marijuana than smoke cigarettes, with one recent study showing a respective 23.8% versus 43.6% of surveyed high school seniors.[223] Recent studies have shown that male late adolescents are far more likely than females to smoke cigarettes. The study indicated a discernible gender difference in the prevalence of smoking among the students: more males than females began smoking while in primary and high school, whereas most females started smoking after high school.[224] This may be attributed to recent changing social and political views towards marijuana; issues such as medicinal use and legalization have tended towards painting the drug in a more positive light than historically, while cigarettes continue to be vilified due to associated health risks.
Different drug habits often relate to one another in a highly significant manner. It has been demonstrated that adolescents who drink at least to some degree may be as much as sixteen times more likely than non-drinkers to experiment with illicit drugs.[225]
Peer acceptance and social norms gain a significantly greater hand in directing behavior at the onset of adolescence; as such, the alcohol and illegal drug habits of teens tend to be shaped largely by the substance use of friends and other classmates. In fact, studies suggest that more significantly than actual drug norms, an individual's perception of the illicit drug use by friends and peers is highly associated with his or her own habits in substance use during both middle and high school, a relationship that increases in strength over time.[226] Whereas social influences on alcohol use and marijuana use tend to work directly in the short term, peer and friend norms on smoking cigarettes in middle school have a profound effect on one's own likelihood to smoke cigarettes well into high school.[226] Perhaps the strong correlation between peer influence in middle school and cigarette smoking in high school may be explained by the addictive nature of cigarettes, which could lead many students to continue their smoking habits from middle school into late adolescence.
Until mid-to-late adolescence, boys and girls show relatively little difference in drinking motives.[227] Distinctions between the reasons for alcohol consumption of males and females begin to emerge around ages 14–15; overall, boys tend to view drinking in a more social light than girls, who report on average a more frequent use of alcohol as a coping mechanism.[227] The latter effect appears to shift in late adolescence and at the onset of early adulthood (20–21 years of age); however, despite this trend, age tends to bring a greater desire to drink for pleasure rather than coping in both boys and girls.[227]
Drinking habits and the motives behind them often reflect certain aspects of an individual's personality; in fact, four dimensions of the Five-Factor Model of personality demonstrate associations with drinking motives (all but 'Openness'). Greater enhancement motives for alcohol consumption tend to reflect high levels of extraversion and sensation-seeking in individuals; such enjoyment motivation often also indicates low conscientiousness, manifesting in lowered inhibition and a greater tendency towards aggression. On the other hand, drinking to cope with negative emotional states correlates strongly with high neuroticism and low agreeableness.[227] Alcohol use as a negative emotion control mechanism often links with many other behavioral and emotional impairments, such as anxiety, depression, and low self-esteem.[227]
Research has generally shown striking uniformity across different cultures in the motives behind teen alcohol use. Social engagement and personal enjoyment appear to play a fairly universal role in adolescents' decision to drink throughout separate cultural contexts. Surveys conducted in Argentina, Hong Kong, and Canada have each indicated the most common reason for drinking among adolescents to relate to pleasure and recreation; 80% of Argentinian teens reported drinking for enjoyment, while only 7% drank to improve a bad mood.[227] The most prevalent answers among Canadian adolescents were to "get in a party mood," 18%; "because I enjoy it," 16%; and "to get drunk," 10%.[227] In Hong Kong, female participants most frequently reported drinking for social enjoyment, while males most frequently reported drinking to feel the effects of alcohol.[227]
Much research has been conducted on the psychological ramifications of body image on adolescents. Modern day teenagers are exposed to more media on a daily basis than any generation before them. As such, modern day adolescents are exposed to many representations of ideal, societal beauty. The concept of a person being unhappy with their own image or appearance has been defined as "body dissatisfaction". In teenagers, body dissatisfaction is often associated with body mass, low self-esteem, and atypical eating patterns that can result in health procedures.[228][229] Scholars continue to debate the effects of media on body dissatisfaction in teens.[230][231]
Because exposure to media has increased over the past decade, adolescents' use of computers, cell phones, stereos and televisions to gain access to various mediums of popular culture has also increased. Almost all American households have at least one television, more than three-quarters of all adolescents' homes have access to the Internet, and more than 90% of American adolescents use the Internet at least occasionally.[232] As a result of the amount of time adolescents spend using these devices, their total media exposure is high. In the last decade, the amount of time that adolescents spend on the computer has greatly increased.[233] Online activities with the highest rates of use among adolescents are video games (78% of adolescents), email (73%), instant messaging (68%), social networking sites (65%), news sources (63%), music (59%), and videos (57%).
In the 2000s, social networking sites proliferated and a high proportion of adolescents used them: as of 2012 73% of 12–17 year olds reported having at least one social networking profile;[234] two-thirds (68%) of teens texted every day, half (51%) visited social networking sites daily, and 11% sent or received tweets at least once every day. More than a third (34%) of teens visited their main social networking site several times a day. One in four (23%) teens were "heavy" social media users, meaning they used at least two different types of social media each and every day.[235]
Although research has been inconclusive, some findings have indicated that electronic communication negatively affects adolescents' social development, replaces face-to-face communication, impairs their social skills, and can sometimes lead to unsafe interaction with strangers. A 2015 review reported that "adolescents lack awareness of strategies to cope with cyberbullying, which has been consistently associated with an increased likelihood of depression."[236] Studies have shown differences in the ways the Internet negatively impacts adolescents' social functioning. Online socializing tends to make girls particularly vulnerable, while socializing in Internet cafés seems only to affect boys' academic achievement. However, other research suggests that Internet communication brings friends closer and is beneficial for socially anxious teens, who find it easier to interact socially online.[237] The more conclusive finding has been that Internet use has a negative effect on the physical health of adolescents, as time spent using the Internet replaces time doing physical activities. However, the Internet can be significantly useful in educating teens because of the access they have to information on many topics.
A broad way of defining adolescence is as the transition from childhood to adulthood. According to Hogan & Astone (1986), this transition can include markers such as leaving school, starting a full-time job, leaving the home of origin, getting married, and becoming a parent for the first time.[238] However, the time frame of this transition varies drastically by culture. In some countries, such as the United States, adolescence can last nearly a decade, but in others, the transition—often in the form of a ceremony—can last for only a few days.[239]
Some examples of social and religious transition ceremonies that can be found in the U.S., as well as in other cultures around the world, are Confirmation, Bar and Bat Mitzvahs, Quinceañeras, sweet sixteens, cotillions, and débutante balls. In other countries, initiation ceremonies play an important role, marking the transition into adulthood or the entrance into adolescence. This transition may be accompanied by obvious physical changes, which can vary from a change in clothing to tattoos and scarification.[218] Furthermore, transitions into adulthood may also vary by gender, and specific rituals may be more common for males or for females. This illuminates the extent to which adolescence is, at least in part, a social construction; it takes shape differently depending on the cultural context, and may be enforced more by cultural practices or transitions than by universal chemical or biological physical changes.
At the decision-making point of their lives, youth are susceptible to drug addiction, sexual abuse, peer pressure, violent crimes and other illegal activities. Developmental Intervention Science (DIS) is a fusion of the literatures of developmental and intervention science. Interventions in this tradition serve both the needs of the community and those of psychologically stranded youth by focusing on risky and inappropriate behaviors while promoting positive self-development and self-esteem among adolescents.[240]
The concept of adolescence has been criticized by experts, such as Robert Epstein, who state that an undeveloped brain is not the main cause of teenage turmoil.[241][242] Some have criticized the concept of adolescence because it is a relatively recent phenomenon in human history created by modern society,[243][244][245][246] and have been highly critical of what they view as the infantilization of young adults in American society.[247] In an article for Scientific American, Robert Epstein and Jennifer Ong state that "American-style teen turmoil is absent in more than 100 cultures around the world, suggesting that such mayhem is not biologically inevitable. Second, the brain itself changes in response to experiences, raising the question of whether adolescent brain characteristics are the cause of teen tumult or rather the result of lifestyle and experiences."[248] David Moshman has also stated with regard to adolescence that brain research "is crucial for a full picture, but it does not provide an ultimate explanation."[249]
Other critics of the concept of adolescence point to individual differences in brain growth rate, citing that some (though not all) early teens still have an infantile, undeveloped corpus callosum, and concluding that talk of "the adult in *every* adolescent" is too generalizing. These critics tend to support the notion that a more interconnected brain makes more precise distinctions (citing Pavlov's comparisons of conditioned reflexes in different species) and that there is a non-arbitrary threshold at which distinctions become sufficiently precise to correct assumptions afterward, as opposed to being ultimately dependent on exterior assumptions for communication. They argue that this threshold is the one at which an individual is objectively capable of speaking for himself or herself, as opposed to culturally arbitrary measures of "maturity" which often treat this ability as a sign of "immaturity" merely because it leads to questioning of authorities. These critics also stress the low probability of the threshold being reached on a birthday, and instead advocate non-chronological emancipation at the threshold of afterward correction of assumptions.[250] They sometimes cite similarities between "adolescent" behavior and KZ syndrome (inmate behavior in adults in prison camps), such as aggression being explainable by oppression, and "immature" financial or other risk behavior being explainable by a way out of captivity being worth more to captive people than any incremental improvement in captivity; they argue that this theory successfully predicted remaining "immature" behavior after the age of majority by means of longer-term traumatization. In this context, they refer to the fallibility of official assumptions about what is good or bad for an individual, concluding that paternalistic "rights" may harm the individual. They also argue that, since it never took many years to move from one group to another to avoid inbreeding in the Paleolithic, evolutionary psychology cannot account for a long period of "immature" risk behavior.[251]
en/470.html.txt
An autobiography (from the Greek, αὐτός-autos self + βίος-bios life + γράφειν-graphein to write; also informally called an autobio[1]) is a self-written account of one's own life. The word "autobiography" was first used deprecatingly by William Taylor in 1797 in the English periodical The Monthly Review, when he suggested the word as a hybrid, but condemned it as "pedantic". However, its next recorded use was in its present sense, by Robert Southey in 1809.[2] Despite only being named early in the nineteenth century, first-person autobiographical writing originates in antiquity. Roy Pascal differentiates autobiography from the periodic self-reflective mode of journal or diary writing by noting that "[autobiography] is a review of a life from a particular moment in time, while the diary, however reflective it may be, moves through a series of moments in time".[3] Autobiography thus takes stock of the autobiographer's life from the moment of composition. While biographers generally rely on a wide variety of documents and viewpoints, autobiography may be based entirely on the writer's memory. The memoir form is closely associated with autobiography but it tends, as Pascal claims, to focus less on the self and more on others during the autobiographer's review of his or her life.[3]
Autobiographical works are by nature subjective. The inability—or unwillingness—of the author to accurately recall memories has in certain cases resulted in misleading or incorrect information. Some sociologists and psychologists have noted that autobiography offers the author the ability to recreate history.
Spiritual autobiography is an account of an author's struggle or journey towards God, followed by a religious conversion, often interrupted by moments of regression. The author re-frames his or her life as a demonstration of divine intention through encounters with the Divine. The earliest example of a spiritual autobiography is Augustine's Confessions, though the tradition has expanded to include other religious traditions in works such as Zahid Rohari's An Autobiography and Black Elk Speaks. The spiritual autobiography works as an endorsement of the author's religion.
A memoir is slightly different in character from an autobiography. While an autobiography typically focuses on the "life and times" of the writer, a memoir has a narrower, more intimate focus on his or her own memories, feelings and emotions. Memoirs have often been written by politicians or military leaders as a way to record and publish an account of their public exploits. One early example is that of Julius Caesar's Commentarii de Bello Gallico, also known as Commentaries on the Gallic Wars. In the work, Caesar describes the battles that took place during the nine years that he spent fighting local armies in the Gallic Wars. His second memoir, Commentarii de Bello Civili (or Commentaries on the Civil War) is an account of the events that took place between 49 and 48 BC in the civil war against Gnaeus Pompeius and the Senate.
Leonor López de Córdoba (1362–1420) wrote what is supposed to be the first autobiography in Spanish. The English Civil War (1642–1651) provoked a number of examples of this genre, including works by Sir Edmund Ludlow and Sir John Reresby. French examples from the same period include the memoirs of Cardinal de Retz (1614–1679) and the Duc de Saint-Simon.
The term "fictional autobiography" signifies novels about a fictional character written as though the character were writing their own autobiography, meaning that the character is the first-person narrator and that the novel addresses both internal and external experiences of the character. Daniel Defoe's Moll Flanders is an early example. Charles Dickens' David Copperfield is another such classic, and J.D. Salinger's The Catcher in the Rye is a well-known modern example of fictional autobiography. Charlotte Brontë's Jane Eyre is yet another example of fictional autobiography, as noted on the front page of the original version. The term may also apply to works of fiction purporting to be autobiographies of real characters, e.g., Robert Nye's Memoirs of Lord Byron.
In antiquity such works were typically entitled apologia, purporting to be self-justification rather than self-documentation. John Henry Newman's Christian confessional work (first published in 1864) is entitled Apologia Pro Vita Sua in reference to this tradition.
The Jewish historian Flavius Josephus introduces his autobiography (Josephi Vita, c. 99) with self-praise, which is followed by a justification of his actions as a Jewish rebel commander of Galilee.[4]
The pagan rhetor Libanius (c. 314–394) framed his life memoir (Oration I, begun in 374) as one of his orations, not of a public kind, but of a literary kind intended to be read in private.
Augustine (354–430) applied the title Confessions to his autobiographical work, and Jean-Jacques Rousseau used the same title in the 18th century, initiating the chain of confessional, sometimes racy, and highly self-critical autobiographies of the Romantic era and beyond. Augustine's was arguably the first Western autobiography ever written, and it became an influential model for Christian writers throughout the Middle Ages. It tells of the hedonistic lifestyle Augustine lived for a time in his youth, associating with young men who boasted of their sexual exploits; his following and eventual abandonment of the anti-sex and anti-marriage Manichaeism in his attempts to seek sexual morality; and his subsequent return to Christianity through his embrace of Skepticism and the New Academy movement (developing the view that sex is good and virginity better, comparing the former to silver and the latter to gold; Augustine's views subsequently strongly influenced Western theology[5]). Confessions ranks among the great masterpieces of Western literature.[6]
In the spirit of Augustine's Confessions is the 12th-century Historia Calamitatum of Peter Abelard, outstanding as an autobiographical document of its period.
In the 15th century, Leonor López de Córdoba, a Spanish noblewoman, wrote her Memorias, which may be the first autobiography in Castilian.
Zāhir ud-Dīn Mohammad Bābur, who founded the Mughal dynasty of South Asia, kept a journal, the Bāburnāma (Chagatai/Persian: بابر نامہ; literally: "Book of Babur" or "Letters of Babur"), written between 1493 and 1529.
One of the first great autobiographies of the Renaissance is that of the sculptor and goldsmith Benvenuto Cellini (1500–1571), written between 1556 and 1558, and entitled by him simply Vita (Italian: Life). He declares at the start: "No matter what sort he is, everyone who has to his credit what are or really seem great achievements, if he cares for truth and goodness, ought to write the story of his own life in his own hand; but no one should venture on such a splendid undertaking before he is over forty."[7] These criteria for autobiography generally persisted until recent times, and most serious autobiographies of the next three hundred years conformed to them.
Another autobiography of the period is De vita propria, by the Italian mathematician, physician and astrologer Gerolamo Cardano (1574).
The earliest known autobiography written in English is the Book of Margery Kempe, written in 1438.[8] Following in the earlier tradition of a life story told as an act of Christian witness, the book describes Margery Kempe's pilgrimages to the Holy Land and Rome, her attempts to negotiate a celibate marriage with her husband, and most of all her religious experiences as a Christian mystic. Extracts from the book were published in the early sixteenth century but the whole text was published for the first time only in 1936.[9]
Possibly the first publicly available autobiography written in English was Captain John Smith's autobiography published in 1630[10] which was regarded by many as not much more than a collection of tall tales told by someone of doubtful veracity. This changed with the publication of Philip Barbour's definitive biography in 1964 which, amongst other things, established independent factual bases for many of Smith's "tall tales", many of which could not have been known by Smith at the time of writing unless he was actually present at the events recounted.[11]
Other notable English autobiographies of the 17th century include those of Lord Herbert of Cherbury (1643, published 1764) and John Bunyan (Grace Abounding to the Chief of Sinners, 1666).
Jarena Lee (1783–1864) was the first African American woman to have a published autobiography in the United States.[12]
Following the trend of Romanticism, which greatly emphasized the role and the nature of the individual, and in the footsteps of Jean-Jacques Rousseau's Confessions, a more intimate form of autobiography, exploring the subject's emotions, came into fashion. Stendhal's autobiographical writings of the 1830s, The Life of Henry Brulard and Memoirs of an Egotist, are both avowedly influenced by Rousseau.[13] An English example is William Hazlitt's Liber Amoris (1823), a painful examination of the writer's love-life.
With the rise of education, cheap newspapers and cheap printing, modern concepts of fame and celebrity began to develop, and the beneficiaries of this were not slow to cash in on this by producing autobiographies. It became the expectation—rather than the exception—that those in the public eye should write about themselves—not only writers such as Charles Dickens (who also incorporated autobiographical elements in his novels) and Anthony Trollope, but also politicians (e.g. Henry Brooks Adams), philosophers (e.g. John Stuart Mill), churchmen such as Cardinal Newman, and entertainers such as P. T. Barnum. Increasingly, in accordance with romantic taste, these accounts also began to deal, amongst other topics, with aspects of childhood and upbringing—far removed from the principles of "Cellinian" autobiography.
From the 17th century onwards, "scandalous memoirs" by supposed libertines, serving a public taste for titillation, have been frequently published. Typically pseudonymous, they were (and are) largely works of fiction written by ghostwriters. So-called "autobiographies" of modern professional athletes and media celebrities—and to a lesser extent about politicians—generally written by a ghostwriter, are routinely published. Some celebrities, such as Naomi Campbell, admit to not having read their "autobiographies".[citation needed] Some sensationalist autobiographies such as James Frey's A Million Little Pieces have been publicly exposed as having embellished or fictionalized significant details of the authors' lives.
Autobiography has become an increasingly popular and widely accessible form. A Fortunate Life by Albert Facey (1979) has become an Australian literary classic.[14] With the critical and commercial success in the United States of such memoirs as Angela's Ashes and The Color of Water, more and more people have been encouraged to try their hand at this genre. A recent example is Maggie Nelson's The Argonauts, which Nelson calls "autotheory", a combination of autobiography and critical theory.[15]
A genre where the "claim for truth" overlaps with fictional elements though the work still purports to be autobiographical is autofiction.
en/4700.html.txt
ADDED
@@ -0,0 +1,35 @@
A police officer, also known as an officer, policeman, or policewoman, is a warranted law enforcement employee of a police force. In most countries, "police officer" is a generic term not specifying a particular rank. In some, the use of the rank "officer" is legally reserved for military personnel.
Police officers are generally charged with the apprehension of suspects and the prevention, detection, and reporting of crime, protection and assistance of the general public, and the maintenance of public order. Police officers may be sworn to an oath, and have the power to arrest people and detain them for a limited time, along with other duties and powers. Some officers are trained in special duties, such as counter-terrorism, surveillance, child protection, VIP protection, civil law enforcement, and investigation techniques into major crime including fraud, rape, murder, and drug trafficking. Although many police officers wear a uniform, some work in plain clothes in order to pass themselves off as civilians. In most countries police officers are given exemptions from certain laws to perform their duties. For example, an officer may use force if necessary to arrest or detain a person when it would ordinarily be assault. In some countries, officers can also break road rules to perform their duties.[1]
The word "police" comes from the Greek politeia, meaning government, which came to mean its civil administration. The more general term for the function is law enforcement officer or peace officer. A sheriff is typically the top police officer of a county, with that word coming from the person enforcing law over a shire. A person who has been deputized to serve the function of the sheriff is referred to as a deputy.
Police officers are those empowered by government to enforce the laws it creates. In The Federalist collection of articles and essays, James Madison wrote: "If men were angels, no Government would be necessary". These words apply to those who serve government, including police. A common nickname for a police officer is "cop"; derived from the verb sense "to arrest", itself derived from "to grab". Thus, "someone who captures", a "copper", was shortened to just "cop".[2] It may also find its origin in the Latin capere, brought to English via the Old French caper.[3]
Responsibilities of a police officer are varied, and may differ greatly from one political context to another. Typical duties relate to keeping the peace, law enforcement, protection of people and property and the investigation of crimes. Officers are expected to respond to a variety of situations that may arise while they are on duty. Rules and guidelines dictate how an officer should behave within the community, and in many contexts, restrictions are placed on what the uniformed officer wears. In some countries, rules and procedures dictate that a police officer is obliged to intervene in a criminal incident, even if they are off-duty. Police officers in nearly all countries retain their lawful powers while off duty.[4]
In the majority of Western legal systems, the major role of the police is to maintain order, keeping the peace through surveillance of the public, and the subsequent reporting and apprehension of suspected violators of the law. They also function to discourage crimes through high-visibility policing, and most police forces have an investigative capability. Police have the legal authority to arrest and detain, usually granted by magistrates. Police officers also respond to emergency calls, along with routine community policing.
Police are often used as an emergency service and may provide a public safety function at large gatherings, as well as in emergencies, disasters, search and rescue situations, and road traffic collisions. To provide a prompt response in emergencies, the police often coordinate their operations with fire and emergency medical services. In some countries, individuals serve jointly as police officers and firefighters (creating the role of fire police). In many countries, there is a common emergency service number that allows the police, firefighters, or medical services to be summoned to an emergency. Some countries, such as the United Kingdom, have outlined command procedures for use in major emergencies or disorder. The Gold Silver Bronze command structure is a system set up to improve communications between ground-based officers and the control room: typically, the Bronze Commander is a senior officer on the ground, coordinating efforts at the centre of the emergency; the Silver Commander is positioned in an 'Incident Control Room' erected to improve communications at the scene; and the Gold Commander directs from the Control Room.
Police are also responsible for reprimanding minor offenders by issuing citations which typically may result in the imposition of fines, particularly for violations of traffic law. Traffic enforcement is often and effectively accomplished by police officers on motorcycles—called motor officers, these officers refer to the motorcycles they ride on duty as simply motors. Police are also trained to assist persons in distress, such as motorists whose car has broken down and people experiencing a medical emergency. Police are typically trained in basic first aid such as CPR.
Some park rangers are commissioned as law enforcement officers and carry out a law-enforcement role within national parks and other back-country wilderness and recreational areas, while military police perform law enforcement functions within the military.
In most countries, candidates for the police force must have completed some formal education.[5] Increasing numbers of recruits possess tertiary education,[6] and in response many police forces have developed a "fast-track" scheme whereby those with university degrees spend two to three years as a Constable before receiving promotion to higher ranks, such as Sergeant or Inspector. (Officers who work within investigative divisions or in plain clothes are not necessarily of a higher rank but merely have different duties.)[citation needed] Police officers are also recruited from those with experience in the military or security services. In the United States, state laws may codify statewide qualification standards regarding age, education, criminal record, and training, but elsewhere requirements are set by local police agencies, each of which may have different requirements.
Promotion is not automatic and usually requires the candidate to pass some kind of examination, interview board or other selection procedure. Although promotion normally includes an increase in salary, it also brings with it an increase in responsibility and, for most, an increase in administrative paperwork. There is no stigma attached to remaining a patrol officer, as experienced line patrol officers are highly regarded.
Dependent upon each agency, but generally after completing two years of service, officers may apply for specialist positions, such as detective, police dog handler, mounted police officer, motorcycle officer, water police officer, or firearms officer (in countries where police are not routinely armed).
In some countries, including Singapore, police ranks are supplemented through conscription, similar to national service in the military. Qualifications may thus be relaxed or enhanced depending on the target mix of conscripts. Conscripts face tougher physical requirements in areas such as eyesight, but minimum academic qualification requirements are less stringent. Some join as volunteers, again via differing qualification requirements.
In some societies, police officers are paid relatively well compared to other occupations; their pay depends on their rank within the police force and their years of service.[7] In the United States, an average police officer's salary was between $53,561 and $64,581 in 2020.[8] In the United Kingdom, a police officer's average salary for the year 2015–16 was £30,901.[citation needed]
There are numerous issues affecting the safety and health of police officers, including line of duty deaths and occupational stress. On August 6, 2019, New Jersey Attorney General Gurbir Grewal announced creation of the first U.S. statewide program to support the mental health of police officers. The goal of the program would be to train officers in emotional resiliency and to help destigmatize mental health issues.[9]
Almost universally, police officers are authorized the use of force, up to and including deadly force, when acting in a law enforcement capacity.[10] Although most law enforcement agencies follow some variant of the use of force continuum, where officers are only authorized the level of force required to match situational requirements, specific thresholds and responses vary between jurisdictions.[11] While officers are trained to avoid excessive use of force, and may be held legally accountable for infractions, the variability of law enforcement and its dependence on human judgment have made the subject an area of controversy and research.[12][13]
In the performance of their duties, police officers may act unlawfully, either deliberately or as a result of errors in judgment.[14] Police accountability efforts strive to protect citizens and their rights by ensuring legal and effective law enforcement conduct, while affording individual officers the required autonomy, protection, and discretion. As an example, the use of body-worn cameras has been shown to reduce both instances of misconduct and complaints against officers.[15]
en/4701.html.txt
ADDED
@@ -0,0 +1,23 @@
A politician is a person active in party politics, or a person holding or seeking an office in government. Politicians propose, support and create laws or policies that govern the land and, by extension, its people. Broadly speaking, a "politician" can be anyone who seeks to achieve political power in any bureaucratic institution.
Politicians are people who are politically active, especially in party politics. Positions range from local offices to executive, legislative, and judicial offices of regional and national governments.[1][2] Some elected law enforcement officers, such as sheriffs, are considered politicians.[3][4]
Politicians are known for their rhetoric, as in speeches or campaign advertisements. They are especially known for using common themes that allow them to develop their political positions in terms familiar to the voters.[5] Politicians of necessity become expert users of the media.[6] Politicians in the 19th century made heavy use of newspapers, magazines, and pamphlets, as well as posters.[7] In the 20th century, they branched into radio and television, making television commercials the single most expensive part of an election campaign.[8] In the 21st century, they have become increasingly involved with the social media based on the Internet and smartphones.[9]
Rumor has always played a major role in politics, with negative rumors about an opponent typically more effective than positive rumors about one's own side.[10]
Once elected, the politician becomes a government official and has to deal with a permanent bureaucracy of non-politicians. Historically, there has been a subtle conflict between the long-term goals of each side.[11] In patronage-based systems, such as the United States and Canada in the 19th century, winning politicians replaced the bureaucracy with local politicians who formed their base of support, the "spoils system". Civil service reform was initiated to eliminate the resulting corruption of government services.[12] However, in many less developed countries, the spoils system remains in full-scale operation today.[13]
Mattozzi and Merlo argue that two main career paths are typically followed by politicians in modern democracies. First come the career politicians, who work in the political sector until retirement. Second are the "political careerists", politicians who gain a reputation for expertise in controlling certain bureaucracies, then leave politics for a well-paid career in the private sector making use of their political contacts.[14]
The personal histories of politicians have been frequently studied, as it is presumed that their experiences and characteristics shape their beliefs and behaviors. There are four pathways by which a politician's biography could influence their leadership style and abilities. The first is that biography may influence one's core beliefs, which are used to shape a worldview. The second is that politicians' skills and competence are influenced by personal experience. The areas of skill and competence can define where they devote resources and attention as a leader. The third pathway is that biographical attributes may define and shape political incentives. A leader's previous profession, for example, could be viewed as of higher importance, causing a disproportionate investment of leadership resources to ensure the growth and health of that profession, including former colleagues. Other examples besides profession include the politician's innate characteristics, such as race or gender. The fourth pathway is how a politician's biography affects their public perception, which can, in turn, affect their leadership style. Female politicians, for example, may use different strategies to attract the same level of respect given to male politicians.[15]
Numerous scholars have studied the characteristics of politicians, comparing those at the local and national levels, and comparing the more liberal or the more conservative ones, and comparing the more successful and less successful in terms of elections.[16] In recent years, special attention has focused on the distinctive career path of women politicians.[17] For example, there are studies of the "Supermadre" model in Latin American politics.[18]
Many politicians have a knack for remembering thousands of names and faces and recalling personal anecdotes about their constituents; it is an advantage in the job, rather like being seven feet tall for a basketball player. United States Presidents George W. Bush and Bill Clinton were renowned for their memories.[19][20]
Many critics attack politicians for being out of touch with the public. Areas of friction include the manner in which politicians speak, which has been described as being overly formal and filled with many euphemistic and metaphorical expressions and commonly perceived as an attempt to "obscure, mislead, and confuse".[21]
In the popular image, politicians are thought of as clueless, selfish, incompetent and corrupt, taking money in exchange for goods or services, rather than working for the general public good.[22] Politicians in many countries are regarded as the "most hated professionals".[23]
en/4702.html.txt
ADDED
@@ -0,0 +1,158 @@
Politics (from Greek: Πολιτικά, politiká, 'affairs of the cities') is the set of activities that are associated with making decisions in groups, or other forms of power relations between individuals, such as the distribution of resources or status. The academic study of politics is referred to as political science.
Politics is a multifaceted word. It may be used positively in the context of a "political solution" which is compromising and non-violent,[1] or descriptively as "the art or science of government", but also often carries a negative connotation.[2] For example, abolitionist Wendell Phillips declared that "we do not play politics; anti-slavery is no half-jest with us."[3] The concept has been defined in various ways, and different approaches have fundamentally differing views on whether it should be used extensively or limitedly, empirically or normatively, and on whether conflict or co-operation is more essential to it.
A variety of methods are deployed in politics, which include promoting one's own political views among people, negotiation with other political subjects, making laws, and exercising force, including warfare against adversaries.[4][5][6][7][8] Politics is exercised on a wide range of social levels, from clans and tribes of traditional societies, through modern local governments, companies and institutions up to sovereign states, to the international level. In modern nation states, people often form political parties to represent their ideas. Members of a party often agree to take the same position on many issues and agree to support the same changes to law and the same leaders. An election is usually a competition between different parties.
A political system is a framework which defines acceptable political methods within a society. The history of political thought can be traced back to early antiquity, with seminal works such as Plato's Republic, Aristotle's Politics, Chanakya's Arthashastra and Chanakya Niti (3rd century BCE), as well as the works of Confucius.[9]
The English word politics has its roots in the name of Aristotle's classic work, Politiká, which introduced the Greek term politiká (Πολιτικά, 'affairs of the cities'). In the mid-15th century, Aristotle's composition was rendered in Early Modern English as Polettiques [sic],[a][10] which became Politics in Modern English.
The singular politic was first attested in English in 1430, coming from Middle French politique—itself from Latin politicus,[11] a Latinization of the Greek πολιτικός (politikos) from πολίτης (polites, 'citizen') and πόλις (polis, 'city').[12]
Politics comprises all the activities of co-operation, negotiation and conflict within and between societies, whereby people go about organizing the use, production or distribution of human, natural and other resources in the course of the production and reproduction of their biological and social life.[17]
There are several ways in which approaching politics has been conceptualized.
Adrian Leftwich has differentiated views of politics based on how extensive or limited their perception of what counts as 'political' is.[18] The extensive view sees politics as present across the sphere of human social relations, while the limited view restricts it to certain contexts. For example, in a more restrictive way, politics may be viewed as primarily about governance,[19] while a feminist perspective could argue that sites which have been viewed traditionally as non-political should indeed be viewed as political as well.[20] This latter position is encapsulated in the slogan the personal is political, which disputes the distinction between private and public issues. Instead, politics may be defined by the use of power, as has been argued by Robert A. Dahl.[21]
Some perspectives on politics view it empirically as an exercise of power, while others see it as a social function with a normative basis.[22] This distinction has been called the difference between political moralism and political realism.[23] For moralists, politics is closely linked to ethics, and is at its extreme in utopian thinking.[23] For example, according to Hannah Arendt, the view of Aristotle was that "to be political…meant that everything was decided through words and persuasion and not through violence;"[24] while according to Bernard Crick "[p]olitics is the way in which free societies are governed. Politics is politics and other forms of rule are something else."[25] In contrast, for realists, represented by those such as Niccolò Machiavelli, Thomas Hobbes, and Harold Lasswell, politics is based on the use of power, irrespective of the ends being pursued.[26][23]
Agonism argues that politics essentially comes down to conflict between competing interests. Political scientist Elmer Schattschneider argued that "at the root of all politics is the universal language of conflict,"[27] while for Carl Schmitt the essence of politics is the distinction of 'friend' from 'foe'.[28] This is in direct contrast to the more co-operative views of politics by Aristotle and Crick. However, a more mixed view between these extremes is provided by Irish author Michael Laver, who noted that:
Politics is about the characteristic blend of conflict and co-operation that can be found so often in human interactions. Pure conflict is war. Pure co-operation is true love. Politics is a mixture of both.[29]
The history of politics spans human history and is not limited to modern institutions of government.
Frans de Waal argued that even chimpanzees engage in politics through "social manipulation to secure and maintain influential positions."[30] Early human forms of social organization—bands and tribes—lacked centralized political structures.[31] These are sometimes referred to as stateless societies.
In ancient history, civilizations did not have definite boundaries as states have today, and their borders could be more accurately described as frontiers. Early dynastic Sumer, and early dynastic Egypt were the first civilizations to define their borders. Moreover, up to the 12th century, many people lived in non-state societies. These range from relatively egalitarian bands and tribes to complex and highly stratified chiefdoms.
There are a number of different theories and hypotheses regarding early state formation that seek generalizations to explain why the state developed in some places but not others. Other scholars believe that generalizations are unhelpful and that each case of early state formation should be treated on its own.[32]
Voluntary theories contend that diverse groups of people came together to form states as a result of some shared rational interest.[33] The theories largely focus on the development of agriculture, and the population and organizational pressure that followed and resulted in state formation. One of the most prominent theories of early and primary state formation is the hydraulic hypothesis, which contends that the state was a result of the need to build and maintain large-scale irrigation projects.[34]
Conflict theories of state formation regard conflict and dominance of some population over another population as key to the formation of states.[33] In contrast with voluntary theories, these arguments believe that people do not voluntarily agree to create a state to maximize benefits, but that states form due to some form of oppression by one group over others. Some theories in turn argue that warfare was critical for state formation.[33]
The first states of sorts were those of early dynastic Sumer and early dynastic Egypt, which arose from the Uruk period and Predynastic Egypt respectively around approximately 3000 BCE.[35] Early dynastic Egypt was based around the Nile River in the north-east of Africa, the kingdom's boundaries being based around the Nile and stretching to areas where oases existed.[36] Early dynastic Sumer was located in southern Mesopotamia with its borders extending from the Persian Gulf to parts of the Euphrates and Tigris rivers.[35]
Although state-forms existed before the rise of the Ancient Greek empire, the Greeks were the first people known to have explicitly formulated a political philosophy of the state, and to have rationally analyzed political institutions. Prior to this, states were described and justified in terms of religious myths.[37]
Several important political innovations of classical antiquity came from the Greek city-states (polis) and the Roman Republic. The Greek city-states before the 4th century granted citizenship rights to their free population; in Athens these rights were combined with a directly democratic form of government that was to have a long afterlife in political thought and history.
The Peace of Westphalia (1648) is considered by political scientists to be the beginning of the modern international system,[38][39][40] in which external powers should avoid interfering in another country's domestic affairs.[41] The principle of non-interference in other countries' domestic affairs was laid out in the mid-18th century by Swiss jurist Emer de Vattel.[42] States became the primary institutional agents in an interstate system of relations. The Peace of Westphalia is said to have ended attempts to impose supranational authority on European states. The "Westphalian" doctrine of states as independent agents was bolstered by the rise in 19th century thought of nationalism, under which legitimate states were assumed to correspond to nations—groups of people united by language and culture.[citation needed]
In Europe, during the 18th century, the classic non-national states were the multinational empires: the Austrian Empire, Kingdom of France, Kingdom of Hungary,[43] the Russian Empire, the Spanish Empire, the Ottoman Empire, and the British Empire. Such empires also existed in Asia, Africa, and the Americas; in the Muslim world, immediately after the death of Muhammad in 632, Caliphates were established, which developed into multi-ethnic trans-national empires.[44] The multinational empire was an absolute monarchy ruled by a king, emperor or sultan. The population belonged to many ethnic groups, and they spoke many languages. The empire was dominated by one ethnic group, and their language was usually the language of public administration. The ruling dynasty was usually, but not always, from that group. Some of the smaller European states were not so ethnically diverse, but were also dynastic states, ruled by a royal house. A few of the smaller states survived, such as the independent principalities of Liechtenstein, Andorra, Monaco, and the republic of San Marino.
Most theories see the nation state as a 19th-century European phenomenon, facilitated by developments such as state-mandated education, mass literacy, and mass media. However, historians[who?] also note the early emergence of a relatively unified state and identity in Portugal and the Dutch Republic.[citation needed] Scholars such as Steven Weber, David Woodward, Michel Foucault, and Jeremy Black have advanced the hypothesis that the nation state did not arise out of political ingenuity or some unknown, undetermined source, nor was it an accident of history or political invention.[45][46][47] Rather, the nation state is an inadvertent byproduct of 15th-century intellectual discoveries in political economy, capitalism, mercantilism, political geography, and geography,[48][49] combined with cartography[50][51] and advances in map-making technologies.[52][53]
Some nation states, such as Germany and Italy, came into existence at least partly as a result of political campaigns by nationalists during the 19th century. In both cases, the territory was previously divided among other states, some of them very small. Liberal ideas of free trade played a role in German unification, which was preceded by a customs union, the Zollverein. National self-determination was a key aspect of United States President Woodrow Wilson's Fourteen Points, leading to the dissolution of the Austro-Hungarian Empire and the Ottoman Empire after the First World War, while the Russian Empire became the Soviet Union after the Russian Civil War. Decolonization led to the creation of new nation states in place of multinational empires in the Third World.
Political globalization began in the 20th century through intergovernmental organizations and supranational unions. The League of Nations was founded after World War I, and after World War II it was replaced by the United Nations. Various international treaties have been signed through it. Regional integration has been pursued by the African Union, ASEAN, the European Union, and Mercosur. Political institutions at the international level include the International Criminal Court, the International Monetary Fund, and the World Trade Organization.
The study of politics is called political science, or politology. It comprises numerous subfields, including comparative politics, political economy, international relations, political philosophy, public administration, public policy, and political methodology. Furthermore, political science is related to, and draws upon, the fields of economics, law, sociology, history, philosophy, geography, psychology/psychiatry, anthropology, and neurosciences.
Comparative politics is the comparative study of different types of constitutions, political actors, legislatures and associated fields, all of them from an intrastate perspective. International relations deals with the interaction between nation-states as well as intergovernmental and transnational organizations. Political philosophy is more concerned with the contributions of various classical and contemporary thinkers and philosophers.
Political science is methodologically diverse and appropriates many methods originating in psychology, social research, and cognitive neuroscience. Approaches include positivism, interpretivism, rational choice theory, behavioralism, structuralism, post-structuralism, realism, institutionalism, and pluralism. Political science, as one of the social sciences, uses methods and techniques that relate to the kinds of inquiries sought: primary sources such as historical documents and official records, secondary sources such as scholarly journal articles, survey research, statistical analysis, case studies, experimental research, and model building.
The political system defines the process for making official government decisions. It is usually compared to the legal system, economic system, cultural system, and other social systems. According to David Easton, "A political system can be designated as the interactions through which values are authoritatively allocated for a society."[54] Each political system is embedded in a society with its own political culture, and they in turn shape their societies through public policy. The interactions between different political systems are the basis for global politics.
Forms of government can be classified in several ways. In terms of the structure of power, there are monarchies (including constitutional monarchies) and republics (usually presidential, semi-presidential, or parliamentary).
The separation of powers describes the degree of horizontal integration between the legislature, the executive, the judiciary, and other independent institutions.
The source of power determines the difference between democracies, oligarchies, and autocracies.
In a democracy, political legitimacy is based on popular sovereignty. Forms of democracy include representative democracy, direct democracy, and demarchy. These are separated by the way decisions are made, whether by elected representatives, referenda, or by citizen juries. Democracies can be either republics or constitutional monarchies.
Oligarchy is a power structure where a minority rules. These may be in the form of anocracy, aristocracy, ergatocracy, geniocracy, gerontocracy, kakistocracy, kleptocracy, meritocracy, noocracy, particracy, plutocracy, stratocracy, technocracy, theocracy, or timocracy.
Autocracies are either dictatorships (including military dictatorships) or absolute monarchies.
In terms of level of vertical integration, political systems can be divided into (from least to most integrated) confederations, federations, and unitary states.
A federation (also known as a federal state) is a political entity characterized by a union of partially self-governing provinces, states, or other regions under a central federal government (federalism). In a federation, the self-governing status of the component states, as well as the division of power between them and the central government, is typically constitutionally entrenched and may not be altered by a unilateral decision of either party, the states or the federal political body. Federations were formed first in Switzerland, then in the United States in 1776, in Canada in 1867, in Germany in 1871, and in Australia in 1901. Compared to a federation, a confederation has less centralized power.
All the above forms of government are variations of the same basic polity, the sovereign state. The state has been defined by Max Weber as a political entity that has a monopoly on violence within its territory, while the Montevideo Convention holds that states need to have a defined territory, a permanent population, a government, and a capacity to enter into international relations.
A stateless society is a society that is not governed by a state.[55] In stateless societies, there is little concentration of authority; most positions of authority that do exist are very limited in power and are generally not permanently held positions; and social bodies that resolve disputes through predefined rules tend to be small.[56] Stateless societies are highly variable in economic organization and cultural practices.[57]
While stateless societies were the norm in human prehistory, few stateless societies exist today; almost the entire global population resides within the jurisdiction of a sovereign state. In some regions nominal state authorities may be very weak and wield little or no actual power. Over the course of history most stateless peoples have been integrated into the state-based societies around them.[58]
Some political philosophies consider the state undesirable, and thus consider the formation of a stateless society a goal to be achieved. A central tenet of anarchism is the advocacy of society without states.[55][59] The type of society sought for varies significantly between anarchist schools of thought, ranging from extreme individualism to complete collectivism.[60] In Marxism, Marx's theory of the state considers that in a post-capitalist society the state, an undesirable institution, would be unnecessary and wither away.[61] A related concept is that of stateless communism, a phrase sometimes used to describe Marx's anticipated post-capitalist society.
Constitutions are written documents that specify and limit the powers of the different branches of government. Although a constitution is a written document, there is also an unwritten constitution. The unwritten constitution is continually being written by the legislative and judicial branches of government; this is just one of those cases in which the nature of the circumstances determines the form of government that is most appropriate.[62] England set the fashion of written constitutions during the Civil War but abandoned them after the Restoration; they were taken up later by the American colonies after their emancipation, then by France after the Revolution, and subsequently by the rest of Europe, including the European colonies.
Constitutions often set out separation of powers, dividing the government into the executive, the legislature, and the judiciary (together referred to as the trias politica), in order to achieve checks and balances within the state. Additional independent branches may also be created, including civil service commissions, election commissions, and supreme audit institutions.
Political culture describes how culture impacts politics. Every political system is embedded in a particular political culture.[63] Lucian Pye's definition is that "Political culture is the set of attitudes, beliefs, and sentiments, which give order and meaning to a political process and which provide the underlying assumptions and rules that govern behavior in the political system".[63]
Trust is a major factor in political culture, as its level determines the capacity of the state to function.[64] Postmaterialism is the degree to which a political culture is concerned with issues which are not of immediate physical or material concern, such as human rights and environmentalism.[63] Religion also has an impact on political culture.[64]
Political corruption is the use of powers for illegitimate private gain, conducted by government officials or their network contacts. Forms of political corruption include bribery, cronyism, nepotism, and political patronage. Forms of political patronage, in turn, include clientelism, earmarking, pork barreling, slush funds, and spoils systems, as well as political machines: political systems that operate for corrupt ends.
When corruption is embedded in political culture, this may be referred to as patrimonialism or neopatrimonialism. A form of government that is built on corruption is called a kleptocracy ('rule of thieves').
Political conflict entails the use of political violence to achieve political ends. As noted by Carl von Clausewitz, "War is a mere continuation of politics by other means."[65] Beyond just inter-state warfare, this may include civil war; wars of national liberation; or asymmetric warfare, such as guerrilla war or terrorism. When a political system is overthrown, the event is called a revolution: it is a political revolution if it does not go further; or a social revolution if the social system is also radically altered. However, these may also be nonviolent revolutions.
Macropolitics can either describe political issues that affect an entire political system (e.g. the nation state), or refer to interactions between political systems (e.g. international relations).[66]
Global politics (or world politics) covers all aspects of politics that affect multiple political systems, in practice meaning any political phenomenon crossing national borders. This can include cities, nation-states, multinational corporations, non-governmental organizations, and/or international organizations. An important element is international relations: the relations between nation-states may be peaceful when they are conducted through diplomacy, or they may be violent, which is described as war. States that are able to exert strong international influence are referred to as superpowers, whereas less-powerful ones may be called regional or middle powers. The international system of power is called the world order, which is affected by the balance of power that defines the degree of polarity in the system. Emerging powers are potentially destabilizing to it, especially if they display revanchism or irredentism.
Politics inside the limits of political systems, which in contemporary context correspond to national borders, are referred to as domestic politics. This includes most forms of public policy, such as social policy, economic policy, or law enforcement, which are executed by the state bureaucracy.
Mesopolitics describes the politics of intermediary structures within a political system, such as national political parties or movements.[66]
A political party is a political organization that typically seeks to attain and maintain political power within government, usually by participating in political campaigns, educational outreach, or protest actions. Parties often espouse an expressed ideology or vision, bolstered by a written platform with specific goals, forming a coalition among disparate interests.[67]
Political parties within a particular political system together form the party system, which can be either multiparty, two-party, dominant-party, or one-party, depending on the level of pluralism. This is affected by characteristics of the political system, including its electoral system. According to Duverger's law, first-past-the-post systems are likely to lead to two-party systems, while proportional representation systems are more likely to create a multiparty system.
Micropolitics describes the actions of individual actors within the political system.[66] This is often described as political participation.[68] Political participation may take many forms.
Democracy is a system of processing conflicts in which outcomes depend on what participants do, but no single force controls what occurs and its outcomes. The uncertainty of outcomes is inherent in democracy. Democracy makes all forces struggle repeatedly to realize their interests and devolves power from groups of people to sets of rules.[69]
Among modern political theorists, there are three contending conceptions of democracy: aggregative, deliberative, and radical.[70]
Electoral systems may be based on multi-member constituencies, with either majoritarian or proportional rules, or on indirect election.
The theory of aggregative democracy claims that the aim of the democratic processes is to solicit the preferences of citizens, and aggregate them together to determine what social policies the society should adopt. Therefore, proponents of this view hold that democratic participation should primarily focus on voting, where the policy with the most votes gets implemented.
Different variants of aggregative democracy exist. Under minimalism, democracy is a system of government in which citizens have given teams of political leaders the right to rule in periodic elections. According to this minimalist conception, citizens cannot and should not "rule" because, for example, on most issues, most of the time, they have no clear views or their views are not well-founded. Joseph Schumpeter articulated this view most famously in his book Capitalism, Socialism, and Democracy.[71] Contemporary proponents of minimalism include William H. Riker, Adam Przeworski, and Richard Posner.
According to the theory of direct democracy, on the other hand, citizens should vote directly, not through their representatives, on legislative proposals. Proponents of direct democracy offer varied reasons to support this view. Political activity can be valuable in itself, it socializes and educates citizens, and popular participation can check powerful elites. Most importantly, citizens do not rule themselves unless they directly decide laws and policies.
Governments will tend to produce laws and policies that are close to the views of the median voter—with half to their left and the other half to their right. This is not a desirable outcome as it represents the action of self-interested and somewhat unaccountable political elites competing for votes. Anthony Downs suggests that ideological political parties are necessary to act as a mediating broker between individual and governments. Downs laid out this view in his 1957 book An Economic Theory of Democracy.[72]
Robert A. Dahl argues that the fundamental democratic principle is that, when it comes to binding collective decisions, each person in a political community is entitled to have his/her interests be given equal consideration (not necessarily that all people are equally satisfied by the collective decision). He uses the term polyarchy to refer to societies in which there exists a certain set of institutions and procedures which are perceived as leading to such democracy. First and foremost among these institutions is the regular occurrence of free and open elections which are used to select representatives who then manage all or most of the public policy of the society. However, these polyarchic procedures may not create a full democracy if, for example, poverty prevents political participation.[73] Similarly, Ronald Dworkin argues that "democracy is a substantive, not a merely procedural, ideal."[74]
Deliberative democracy is based on the notion that democracy is government by deliberation. Unlike aggregative democracy, deliberative democracy holds that, for a democratic decision to be legitimate, it must be preceded by authentic deliberation, not merely the aggregation of preferences that occurs in voting. Authentic deliberation is deliberation among decision-makers that is free from distortions of unequal political power, such as power a decision-maker obtained through economic wealth or the support of interest groups.[75][76][77] If the decision-makers cannot reach consensus after authentically deliberating on a proposal, then they vote on the proposal using a form of majority rule.
Radical democracy is based on the idea that there are hierarchical and oppressive power relations that exist in society. Democracy's role is to make visible and challenge those relations by allowing for difference, dissent and antagonisms in decision-making processes.
Equality is a state of affairs in which all people within a specific society or isolated group have the same social status, especially socioeconomic status, including protection of human rights and dignity, and equal access to certain social goods and social services. Furthermore, it may also include health equality, economic equality and other social securities. Social equality requires the absence of legally enforced social class or caste boundaries and the absence of discrimination motivated by an inalienable part of a person's identity. To this end there must be equal justice under law, and equal opportunity regardless of, for example, sex, gender, ethnicity, age, sexual orientation, origin, caste or class, income or property, language, religion, convictions, opinions, health or disability.
A common way of understanding politics is through the left–right political spectrum, which ranges from left-wing politics via centrism to right-wing politics. This classification is comparatively recent and dates from the French Revolution, when those members of the National Assembly who supported the republic, the common people and a secular society sat on the left and supporters of the monarchy, aristocratic privilege and the Church sat on the right.[86]
Today, the left is generally progressivist, seeking social progress in society. The more extreme elements of the left, known as the far-left, tend to support revolutionary means for achieving this. This includes ideologies such as communism and Marxism. The center-left, on the other hand, advocates more reformist approaches, for example that of social democracy.
In contrast, the right is generally motivated by conservatism, which seeks to conserve what it sees as the important elements of society. The far-right goes beyond this, and often represents a reactionary turn against progress, seeking to undo it. Examples of such ideologies have included Fascism and Nazism. The center-right may be less clear-cut and more mixed in this regard, with neoconservatives supporting the spread of democracy, and one-nation conservatives more open to social welfare programs.
According to Norberto Bobbio, one of the major exponents of this distinction, the left believes in attempting to eradicate social inequality—believing it to be unethical or unnatural,[87] while the right regards most social inequality as the result of ineradicable natural inequalities, and sees attempts to enforce social equality as utopian or authoritarian.[88]
Some ideologies, notably Christian Democracy, claim to combine left- and right-wing politics; according to Geoffrey K. Roberts and Patricia Hogwood, "In terms of ideology, Christian Democracy has incorporated many of the views held by liberals, conservatives and socialists within a wider framework of moral and Christian principles."[89] Movements which claim or formerly claimed to be above the left-right divide include Fascist Terza Posizione economic politics in Italy and Peronism in Argentina.[90][91]
Political freedom (also known as political liberty or autonomy) is a central concept in political thought and one of the most important features of democratic societies. Negative liberty has been described as freedom from oppression or coercion and unreasonable external constraints on action, often enacted through civil and political rights, while positive liberty is the absence of disabling conditions for an individual and the fulfillment of enabling conditions, e.g. economic compulsion, in a society. This capability approach to freedom requires economic, social and cultural rights in order to be realized.
Authoritarianism and libertarianism disagree about the amount of individual freedom each person possesses in that society relative to the state. One author describes authoritarian political systems as those where "individual rights and goals are subjugated to group goals, expectations and conformities,"[92] while libertarians generally oppose the state and hold the individual as sovereign. In their purest form, libertarians are anarchists,[93] who argue for the total abolition of the state, of political parties and of other political entities, while the purest authoritarians are, by definition, totalitarians who support state control over all aspects of society.[94]
For instance, classical liberalism (also known as laissez-faire liberalism)[95] is a doctrine stressing individual freedom and limited government. This includes the importance of human rationality, individual property rights, free markets, natural rights, the protection of civil liberties, constitutional limitation of government, and individual freedom from restraint as exemplified in the writings of John Locke, Adam Smith, David Hume, David Ricardo, Voltaire, Montesquieu and others. According to the libertarian Institute for Humane Studies, "the libertarian, or 'classical liberal,' perspective is that individual well-being, prosperity, and social harmony are fostered by 'as much liberty as possible' and 'as little government as necessary.'"[96] For anarchist political philosopher L. Susan Brown (1993), "liberalism and anarchism are two political philosophies that are fundamentally concerned with individual freedom yet differ from one another in very distinct ways. Anarchism shares with liberalism a radical commitment to individual freedom while rejecting liberalism's competitive property relations."[97]
en/4703.html.txt
ADDED
1 |
+
Pollen is a powdery substance consisting of pollen grains which are male microgametophytes of seed plants, which produce male gametes (sperm cells). Pollen grains have a hard coat made of sporopollenin that protects the gametophytes during the process of their movement from the stamens to the pistil of flowering plants, or from the male cone to the female cone of coniferous plants. If pollen lands on a compatible pistil or female cone, it germinates, producing a pollen tube that transfers the sperm to the ovule containing the female gametophyte. Individual pollen grains are small enough to require magnification to see detail. The study of pollen is called palynology and is highly useful in paleoecology, paleontology, archaeology, and forensics.
Pollen in plants is used for transferring haploid male genetic material from the anther of a single flower to the stigma of another in cross-pollination.[1] In a case of self-pollination, this process takes place from the anther of a flower to the stigma of the same flower.[1]
Pollen is infrequently used as a food and as a food supplement. Because of agricultural practices, it is often contaminated by agricultural pesticides.[2]
Pollen itself is not the male gamete.[3] Each pollen grain contains vegetative (non-reproductive) cells (only a single cell in most flowering plants but several in other seed plants) and a generative (reproductive) cell. In flowering plants the vegetative tube cell produces the pollen tube, and the generative cell divides to form the two sperm cells.
Pollen is produced in the microsporangia in the male cone of a conifer or other gymnosperm or in the anthers of an angiosperm flower. Pollen grains come in a wide variety of shapes, sizes, and surface markings characteristic of the species (see electron micrograph, right). Pollen grains of pines, firs, and spruces are winged. The smallest pollen grain, that of the forget-me-not (Myosotis spp.),[which?] is 2.5–5 µm (0.005 mm) in diameter.[4] Corn pollen grains are large, about 90–100 µm.[5] Most grass pollen is around 20–25 µm.[6]
In angiosperms, during flower development the anther is composed of a mass of cells that appear undifferentiated, except for a partially differentiated dermis. As the flower develops, four groups of sporogenous cells form within the anther. The fertile sporogenous cells are surrounded by layers of sterile cells that grow into the wall of the pollen sac. Some of the cells grow into nutritive cells that supply nutrition for the microspores that form by meiotic division from the sporogenous cells.
In a process called microsporogenesis, four haploid microspores are produced from each diploid sporogenous cell (microsporocyte, pollen mother cell or meiocyte), after meiotic division. After the formation of the four microspores, which are contained by callose walls, the development of the pollen grain walls begins. The callose wall is broken down by an enzyme called callase and the freed pollen grains grow in size and develop their characteristic shape and form a resistant outer wall called the exine and an inner wall called the intine. The exine is what is preserved in the fossil record. Two basic types of microsporogenesis are recognised, simultaneous and successive. In simultaneous microsporogenesis meiotic steps I and II are completed before cytokinesis, whereas in successive microsporogenesis cytokinesis follows. While there may be a continuum with intermediate forms, the type of microsporogenesis has systematic significance. The predominant form amongst the monocots is successive, but there are important exceptions.[7]
During microgametogenesis, the unicellular microspores undergo mitosis and develop into mature microgametophytes containing the gametes.[8] In some flowering plants,[which?] germination of the pollen grain may begin even before it leaves the microsporangium, with the generative cell forming the two sperm cells.
Except in the case of some submerged aquatic plants, the mature pollen grain has a double wall. The vegetative and generative cells are surrounded by a thin delicate wall of unaltered cellulose called the endospore or intine, and a tough resistant outer cuticularized wall composed largely of sporopollenin called the exospore or exine. The exine often bears spines or warts, or is variously sculptured, and the character of the markings is often of value for identifying genus, species, or even cultivar or individual. The spines may be less than a micron in length (spinulus, plural spinuli) referred to as spinulose (scabrate), or longer than a micron (echina, echinae) referred to as echinate. Various terms also describe the sculpturing such as reticulate, a net like appearance consisting of elements (murus, muri) separated from each other by a lumen (plural lumina). These reticulations may also be referred to as brochi.
The pollen wall protects the sperm while the pollen grain is moving from the anther to the stigma; it protects the vital genetic material from drying out and solar radiation. The pollen grain surface is covered with waxes and proteins, which are held in place by structures called sculpture elements on the surface of the grain. The outer pollen wall, which prevents the pollen grain from shrinking and crushing the genetic material during desiccation, is composed of two layers. These two layers are the tectum and the foot layer, which is just above the intine. The tectum and foot layer are separated by a region called the columella, which is composed of strengthening rods. The outer wall is constructed with a resistant biopolymer called sporopollenin.
Pollen apertures are regions of the pollen wall that may involve exine thinning or a significant reduction in exine thickness.[9] They allow shrinking and swelling of the grain caused by changes in moisture content. The process of shrinking the grain is called harmomegathy.[10] Elongated apertures or furrows in the pollen grain are called colpi (singular: colpus) or sulci (singular: sulcus). Apertures that are more circular are called pores. Colpi, sulci and pores are major features in the identification of classes of pollen.[11] Pollen may be referred to as inaperturate (apertures absent) or aperturate (apertures present). The aperture may have a lid (operculum), hence is described as operculate.[12] However the term inaperturate covers a wide range of morphological types, such as functionally inaperturate (cryptoaperturate) and omniaperturate.[7] Inaperaturate pollen grains often have thin walls, which facilitates pollen tube germination at any position.[9] Terms such as uniaperturate and triaperturate refer to the number of apertures present (one and three respectively).
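The counting terms above follow a regular pattern; a tiny illustrative lookup, covering only the terms named in this paragraph (not the full palynological vocabulary):

```python
# Terms from the text, keyed by number of apertures; illustrative only.
APERTURE_TERMS = {
    0: "inaperturate",   # apertures absent
    1: "uniaperturate",  # one aperture
    3: "triaperturate",  # three apertures
}

def aperture_term(n_apertures):
    """Return the descriptive term for a given aperture count, if named above."""
    return APERTURE_TERMS.get(n_apertures)
```

For counts not named in the text the helper returns None rather than guessing a term.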
The orientation of furrows (relative to the original tetrad of microspores) classifies the pollen as sulcate or colpate. Sulcate pollen has a furrow across the middle of what was the outer face when the pollen grain was in its tetrad.[13] If the pollen has only a single sulcus, it is described as monosulcate, has two sulci, as bisulcate, or more, as polysulcate.[14][15] Colpate pollen has furrows other than across the middle of the outer faces.[13] Eudicots have pollen with three colpi (tricolpate) or with shapes that are evolutionarily derived from tricolpate pollen.[16] The evolutionary trend in plants has been from monosulcate to polycolpate or polyporate pollen.[13]
Additionally, gymnosperm pollen grains often have air bladders, or vesicles, called sacci. The sacci are not actually balloons, but are sponge-like, and increase the buoyancy of the pollen grain and help keep it aloft in the wind, as most gymnosperms are anemophilous. Pollen can be monosaccate, (containing one saccus) or bisaccate (containing two sacci). Modern pine, spruce, and yellowwood trees all produce saccate pollen.[17]
The transfer of pollen grains to the female reproductive structure (pistil in angiosperms) is called pollination. This transfer can be mediated by the wind, in which case the plant is described as anemophilous (literally wind-loving). Anemophilous plants typically produce great quantities of very lightweight pollen grains, sometimes with air-sacs. Non-flowering seed plants (e.g., pine trees) are characteristically anemophilous. Anemophilous flowering plants generally have inconspicuous flowers. Entomophilous (literally insect-loving) plants produce pollen that is relatively heavy, sticky and protein-rich, for dispersal by insect pollinators attracted to their flowers. Many insects and some mites are specialized to feed on pollen, and are called palynivores.
In non-flowering seed plants, pollen germinates in the pollen chamber, located beneath the micropyle, underneath the integuments of the ovule. A pollen tube is produced, which grows into the nucellus to provide nutrients for the developing sperm cells. Sperm cells of Pinophyta and Gnetophyta are without flagella, and are carried by the pollen tube, while those of Cycadophyta and Ginkgophyta have many flagella.
When placed on the stigma of a flowering plant, under favorable circumstances, a pollen grain puts forth a pollen tube, which grows down the tissue of the style to the ovary, and makes its way along the placenta, guided by projections or hairs, to the micropyle of an ovule. The nucleus of the tube cell has meanwhile passed into the tube, as does also the generative nucleus, which divides (if it hasn't already) to form two sperm cells. The sperm cells are carried to their destination in the tip of the pollen tube. Double-strand breaks in DNA that arise during pollen tube growth appear to be efficiently repaired in the generative cell that carries the male genomic information to be passed on to the next plant generation.[18] However, the vegetative cell that is responsible for tube elongation appears to lack this DNA repair capability.[18]
Pollen's sporopollenin outer sheath affords it some resistance to the rigours of the fossilisation process that destroy weaker objects; it is also produced in huge quantities. There is an extensive fossil record of pollen grains, often disassociated from their parent plant. The discipline of palynology is devoted to the study of pollen, which can be used both for biostratigraphy and to gain information about the abundance and variety of plants alive — which can itself yield important information about paleoclimates. Also, pollen analysis has been widely used for reconstructing past changes in vegetation and their associated drivers.[19]
Pollen is first found in the fossil record in the late Devonian period,[20][21] but at that time it is indistinguishable from spores.[20] It increases in abundance until the present day.
Nasal allergy to pollen is called pollinosis, and allergy specifically to grass pollen is called hay fever. Generally, pollens that cause allergies are those of anemophilous plants (those whose pollen is dispersed by air currents). Such plants produce large quantities of lightweight pollen (because wind dispersal is random and the likelihood of one pollen grain landing on another flower is small), which can be carried for great distances and is easily inhaled, bringing it into contact with the sensitive nasal passages.
Pollen allergies are common in polar and temperate climate zones, where pollen production is seasonal. In the tropics pollen production varies less with the season, and allergic reactions are correspondingly less common.
In northern Europe, common pollens for allergies are those of birch and alder, and in late summer wormwood and different forms of hay. Grass pollen is also associated with asthma exacerbations in some people, a phenomenon termed thunderstorm asthma.[22]
In the US, people often mistakenly blame the conspicuous goldenrod flower for allergies. Since this plant is entomophilous (its pollen is dispersed by animals), its heavy, sticky pollen does not become independently airborne. Most late summer and fall pollen allergies are probably caused by ragweed, a widespread anemophilous plant.[23]
Arizona was once regarded as a haven for people with pollen allergies, although several ragweed species grow in the desert. However, as suburbs grew and people began establishing irrigated lawns and gardens, more irritating species of ragweed gained a foothold and Arizona lost its claim of freedom from hay fever.
Anemophilous spring blooming plants such as oak, birch, hickory, pecan, and early summer grasses may also induce pollen allergies. Most cultivated plants with showy flowers are entomophilous and do not cause pollen allergies.
The number of people in the United States affected by hay fever is between 20 and 40 million,[24] and such allergy has proven to be the most frequent allergic response in the nation. Some evidence suggests that hay fever and similar allergies are of hereditary origin. Individuals who suffer from eczema or asthma tend to be more susceptible to developing long-term hay fever.[25]
In Denmark, decades of rising temperatures have caused pollen to appear earlier and in greater numbers, as well as the introduction of new species such as ragweed.[26]
The most efficient way to handle a pollen allergy is to avoid contact with the allergen. Affected individuals may at first believe that they have a simple summer cold, but hay fever becomes more evident when the apparent cold does not disappear. Hay fever can be confirmed after examination by a general physician.[27]
Antihistamines are effective at treating mild cases of pollinosis; non-prescription drugs of this type include loratadine, cetirizine and chlorpheniramine. They do not prevent the discharge of histamine, but it has been shown that they prevent part of the chain reaction activated by this biogenic amine, which considerably lowers hay fever symptoms.
Decongestants can be administered in different ways such as tablets and nasal sprays.
Allergy immunotherapy (AIT) treatment involves administering doses of allergens to accustom the body to pollen, thereby inducing specific long-term tolerance.[28] Allergy immunotherapy can be administered orally (as sublingual tablets or sublingual drops), or by injections under the skin (subcutaneous). Discovered by Leonard Noon and John Freeman in 1911, allergy immunotherapy represents the only causative treatment for respiratory allergies.
Most major classes of predatory and parasitic arthropods contain species that eat pollen, despite the common perception that bees are the primary pollen-consuming arthropod group. Many Hymenoptera other than bees consume pollen as adults, though only a small number feed on pollen as larvae (including some ant larvae). Spiders are normally considered carnivores, but pollen is an important source of food for several species, particularly for spiderlings, which catch pollen on their webs. It is not clear, however, how spiderlings manage to eat pollen, since their mouths are not large enough to consume pollen grains.[citation needed] Some predatory mites also feed on pollen, with some species being able to subsist solely on pollen, such as Euseius tularensis, which feeds on the pollen of dozens of plant species. Members of some beetle families such as Mordellidae and Melyridae feed almost exclusively on pollen as adults, while various lineages within larger families such as Curculionidae, Chrysomelidae, Cerambycidae, and Scarabaeidae are pollen specialists even though most members of their families are not (e.g., only 36 of 40,000 species of ground beetles, which are typically predatory, have been shown to eat pollen; this is thought to be a severe underestimate, as feeding habits are known for only 1,000 species). Similarly, ladybird beetles mainly eat insects, but many species also eat pollen, as either part or all of their diet. Hemiptera are mostly herbivores or omnivores, but pollen feeding is known (and has only been well studied in the Anthocoridae).
Many adult flies, especially Syrphidae, feed on pollen, and three UK syrphid species feed strictly on pollen (syrphids, like all flies, cannot eat pollen directly due to the structure of their mouthparts, but can consume pollen contents that are dissolved in a fluid).[29] Some species of fungus, including Fomes fomentarius, are able to break down grains of pollen as a secondary nutrition source that is particularly high in nitrogen.[30] Pollen may be a valuable dietary supplement for detritivores, providing them with nutrients needed for growth, development and maturation.[31] It has been suggested that obtaining nutrients from pollen, deposited on the forest floor during periods of pollen rains, allows fungi to decompose nutritionally scarce litter.[31]
Some species of Heliconius butterflies consume pollen as adults, which appears to be a valuable nutrient source, and these species are more distasteful to predators than the non-pollen consuming species.[32][33]
Although bats, butterflies and hummingbirds are not pollen eaters per se, their consumption of nectar in flowers is an important aspect of the pollination process.
Bee pollen for human consumption is marketed as a food ingredient and as a dietary supplement. The largest constituent is carbohydrates, with protein content ranging from 7 to 35 percent depending on the plant species collected by bees.[34]
Honey produced by bees from natural sources contains pollen derived p-coumaric acid,[35] an antioxidant and natural bactericide that is also present in a wide variety of plants and plant-derived food products.[36]
The U.S. Food and Drug Administration (FDA) has not found any harmful effects of bee pollen consumption, apart from the usual allergies. However, the FDA does not allow bee pollen marketers in the United States to make health claims about their produce, as no scientific basis for such claims has been established. There are also possible dangers, not only from allergic reactions but also from contaminants such as pesticides[2] and from growth of fungi and bacteria related to poor storage procedures. Manufacturers' claims that pollen collecting helps bee colonies are also controversial.[37]
Pine pollen (송화가루; Songhwa Garu) is traditionally consumed in Korea as an ingredient in sweets and beverages.[38]
The growing industries in pollen harvesting for human and bee consumption rely on harvesting pollen baskets from honey bees as they return to their hives, using a pollen trap.[39] When this pollen has been tested for parasites, a multitude of pollinator viruses and eukaryotic parasites have been found in it.[40][41] It is currently unclear whether the parasites are introduced by the bee that collected the pollen or come from contamination of the flower.[41][42] Though this is not likely to pose a risk to humans, it is a major issue for the bumblebee rearing industry, which relies on thousands of tonnes of honey bee-collected pollen per year.[43] Several sterilization methods have been employed, though no method has been 100% effective at sterilizing the pollen without reducing its nutritional value.[44]
In forensic biology, pollen can tell a lot about where a person or object has been, because regions of the world, or even particular locations such as a certain set of bushes, will have a distinctive collection of pollen species.[45] Pollen evidence can also reveal the season in which a particular object picked up the pollen.[46] Pollen has been used to trace activity at mass graves in Bosnia,[47] to catch a burglar who brushed against a Hypericum bush during a crime,[48] and has even been proposed as an additive for bullets to enable tracking them.[49]
In some Native American religions, pollen was used in prayers and rituals to symbolize life and renewal by sanctifying objects, dancing grounds, trails, and sandpaintings. It may also be sprinkled over heads or in mouths. Many Navajo people believed the body became holy when it traveled over a trail sprinkled with pollen.[50]
For agricultural research purposes, assessing the viability of pollen grains can be necessary and illuminating. A very common, efficient method to do so is known as Alexander's stain.[51] This differential stain consists of ethanol, malachite green, distilled water, glycerol, phenol, chloral hydrate, acid fuchsin, orange G, and glacial acetic acid.[52] In angiosperms and gymnosperms, non-aborted pollen grains will appear red or pink, and aborted pollen grains will appear blue or slightly green.
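Scoring viability with the stain then reduces to counting grains by colour. A minimal sketch of that arithmetic (the field counts are invented for illustration):

```python
def viability_percent(red_pink, blue_green):
    """Percent of non-aborted grains (those staining red/pink with Alexander's stain)."""
    total = red_pink + blue_green
    if total == 0:
        raise ValueError("no grains counted")
    return 100 * red_pink / total

# Hypothetical counts from one microscope field:
print(viability_percent(180, 20))  # 90.0
```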
en/4704.html.txt
ADDED
@@ -0,0 +1,100 @@
Pollination is the transfer of pollen from a male part of a plant to a female part of a plant, later enabling fertilisation and the production of seeds, most often by an animal or by wind.[1] Pollinating agents are animals such as insects, birds, and bats; water; wind; and even plants themselves, when self-pollination occurs within a closed flower. Pollination often occurs within a species. When pollination occurs between species it can produce hybrid offspring in nature and in plant breeding work.
In angiosperms, after the pollen grain (gametophyte) has landed on the stigma, it germinates and develops a pollen tube which grows down the style until it reaches an ovary. Its two gametes travel down the tube to where the gametophyte(s) containing the female gametes are held within the carpel. After the pollen tube enters an ovule through the micropyle, one male nucleus fuses with the polar nuclei to produce the endosperm tissue, while the other fuses with the egg cell to produce the embryo.[2][3] Hence the term: "double fertilization". This process results in the production of a seed made of both nutritious tissue and an embryo.
In gymnosperms, the ovule is not contained in a carpel, but exposed on the surface of a dedicated support organ, such as the scale of a cone, so that the penetration of carpel tissue is unnecessary. Details of the process vary according to the division of gymnosperms in question. Two main modes of fertilization are found in gymnosperms. Cycads and Ginkgo have motile sperm that swim directly to the egg inside the ovule, whereas conifers and gnetophytes have sperm that are unable to swim but are conveyed to the egg along a pollen tube.
The study of pollination spans many disciplines, such as botany, horticulture, entomology, and ecology. The pollination process as an interaction between flower and pollen vector was first addressed in the 18th century by Christian Konrad Sprengel. It is important in horticulture and agriculture, because fruiting is dependent on fertilization: the result of pollination. The study of pollination by insects is known as anthecology. There are also studies in economics that examine the costs and benefits of pollination, focusing on bees and on how the process affects the pollinators themselves.
Pollen germination has three stages: hydration, activation and pollen tube emergence. The pollen grain is severely dehydrated so that its mass is reduced, enabling it to be more easily transported from flower to flower. Germination only takes place after rehydration, ensuring that premature germination does not take place in the anther. Hydration allows the plasma membrane of the pollen grain to reform into its normal bilayer organization, providing an effective osmotic membrane. Activation involves the development of actin filaments throughout the cytoplasm of the cell, which eventually become concentrated at the point from which the pollen tube will emerge. Hydration and activation continue as the pollen tube begins to grow.[4]
In conifers, the reproductive structures are borne on cones. The cones are either pollen cones (male) or ovulate cones (female), but some species are monoecious and others dioecious. A pollen cone contains hundreds of microsporangia carried on (or borne on) reproductive structures called sporophylls. Spore mother cells in the microsporangia divide by meiosis to form haploid microspores that develop further by two mitotic divisions into immature male gametophytes (pollen grains). The four resulting cells consist of a large tube cell that forms the pollen tube, a generative cell that will produce two sperm by mitosis, and two prothallial cells that degenerate. These cells comprise a very reduced microgametophyte, that is contained within the resistant wall of the pollen grain.[5][6]
The pollen grains are dispersed by the wind to the female, ovulate cone that is made up of many overlapping scales (sporophylls, and thus megasporophylls), each protecting two ovules, each of which consists of a megasporangium (the nucellus) wrapped in two layers of tissue, the integument and the cupule, that were derived from highly modified branches of ancestral gymnosperms. When a pollen grain lands close enough to the tip of an ovule, it is drawn in through the micropyle (a pore in the integuments covering the tip of the ovule), often by means of a drop of liquid known as a pollination drop. The pollen enters a pollen chamber close to the nucellus, and there it may wait for a year before it germinates and forms a pollen tube that grows through the wall of the megasporangium (=nucellus), where fertilisation takes place. During this time, the megaspore mother cell divides by meiosis to form four haploid cells, three of which degenerate. The surviving one develops as a megaspore and divides repeatedly to form an immature female gametophyte (egg sac). Two or three archegonia containing an egg then develop inside the gametophyte. Meanwhile, in the spring of the second year, two sperm cells are produced by mitosis of the body cell of the male gametophyte. The pollen tube elongates and pierces and grows through the megasporangium wall and delivers the sperm cells to the female gametophyte inside. Fertilisation takes place when the nucleus of one of the sperm cells enters the egg cell in the megagametophyte's archegonium.[6]
In flowering plants, the anthers of the flower produce microspores by meiosis. These undergo mitosis to form male gametophytes, each of which contains two haploid cells. Meanwhile, the ovules produce megaspores by meiosis; further division of these forms the female gametophytes, which are very strongly reduced, each consisting only of a few cells, one of which is the egg. When a pollen grain adheres to the stigma of a carpel it germinates, developing a pollen tube that grows through the tissues of the style, entering the ovule through the micropyle. When the tube reaches the egg sac, two sperm cells pass through it into the female gametophyte and fertilisation takes place.[5]
Pollination may be biotic or abiotic. Biotic pollination relies on living pollinators to move the pollen from one flower to another. Abiotic pollination relies on wind, water or even rain. About 80% of angiosperms rely on biotic pollination.[7]
Abiotic pollination uses nonliving methods such as wind and water to move pollen from one flower to another. This allows the plant to spend energy directly on pollen rather than on attracting pollinators with flowers and nectar.
Some 98% of abiotic pollination is anemophily, pollination by wind. This probably arose from insect pollination, most likely due to changes in the environment or the availability of pollinators.[8][9][10] The transfer of pollen is more efficient than previously thought; wind pollinated plants have developed to have specific heights, in addition to specific floral, stamen and stigma positions that promote effective pollen dispersal and transfer.[11]
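Combining this with the earlier figure that about 80% of angiosperms rely on biotic pollination gives a rough back-of-envelope estimate of the wind-pollinated share of angiosperms:

```python
biotic_share = 0.80           # ~80% of angiosperms use biotic pollination (from the text)
wind_share_of_abiotic = 0.98  # ~98% of abiotic pollination is anemophily (from the text)

# Wind-pollinated fraction = (abiotic remainder) x (wind share of abiotic)
wind_pollinated = (1 - biotic_share) * wind_share_of_abiotic
print(f"~{wind_pollinated:.1%} of angiosperms are wind-pollinated")
```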
Pollination by water, hydrophily, uses water to transport pollen, sometimes as whole anthers; these can travel across the surface of the water to carry dry pollen from one flower to another.[12] In Vallisneria spiralis, an unopened male flower floats to the surface of the water, and, upon reaching the surface, opens up and the fertile anthers project forward. The female flower, also floating, has its stigma protected from the water, while its sepals are slightly depressed into the water, allowing the male flowers to tumble in.[12]
Rain pollination is used by a small percentage of plants. Heavy rain discourages insect pollination and damages unprotected flowers, but can itself disperse the pollen of suitably adapted plants, such as Ranunculus flammula, Narthecium ossifragum, and Caltha palustris.[13] In these plants, excess rain drains away, allowing the floating pollen to come into contact with the stigma.[13] In rain pollination in orchids, the rain removes the anther cap, exposing the pollen. After exposure, raindrops cause the pollen to be shot upward; the stipe then pulls it back, and it falls into the cavity of the stigma. Thus, for the orchid Acampe rigida, this allows the plant to self-pollinate, which is useful when biotic pollinators in the environment have decreased.[14]
It is possible for a plant to have varying pollination methods, including both biotic and abiotic pollination. The orchid Oeceoclades maculata uses both rain and butterflies, depending on its environmental conditions.[15]
More commonly, pollination involves pollinators (also called pollen vectors): organisms that carry or move the pollen grains from the anther of one flower to the receptive part of the carpel or pistil (stigma) of another.[16] Between 100,000 and 200,000 species of animal act as pollinators of the world's 250,000 species of flowering plant.[17] The majority of these pollinators are insects, but about 1,500 species of birds and mammals visit flowers and may transfer pollen between them. Besides birds and bats which are the most frequent visitors, these include monkeys, lemurs, squirrels, rodents and possums.[17]
Entomophily, pollination by insects, often occurs on plants that have developed colored petals and a strong scent to attract insects such as bees, wasps and occasionally ants (Hymenoptera), beetles (Coleoptera), moths and butterflies (Lepidoptera), and flies (Diptera). The existence of insect pollination dates back to the dinosaur era.[18]
In zoophily, pollination is performed by vertebrates such as birds and bats, particularly, hummingbirds, sunbirds, spiderhunters, honeyeaters, and fruit bats. Ornithophily or bird pollination is the pollination of flowering plants by birds. Chiropterophily or bat pollination is the pollination of flowering plants by bats. Plants adapted to use bats or moths as pollinators typically have white petals, strong scent and flower at night, whereas plants that use birds as pollinators tend to produce copious nectar and have red petals.[19]
Insect pollinators such as honey bees (Apis spp.),[20]
bumblebees (Bombus spp.),[21][22] and butterflies (e.g., Thymelicus flavus)[23] have been observed to engage in flower constancy, which means they are more likely to transfer pollen to other conspecific plants.[24] This can be beneficial for the pollinators, as flower constancy prevents the loss of pollen during interspecific flights and pollinators from clogging stigmas with pollen of other flower species. It also improves the probability that the pollinator will find productive flowers easily accessible and recognisable by familiar clues.[25]
Some flowers have specialized mechanisms to trap pollinators to increase effectiveness.[26] Other flowers will attract pollinators by odor. For example, bee species such as Euglossa cordata are attracted to orchids this way, and it has been suggested that the bees will become intoxicated during these visits to the orchid flowers, which last up to 90 minutes.[27] However, in general, plants that rely on pollen vectors tend to be adapted to their particular type of vector, for example day-pollinated species tend to be brightly coloured, but if they are pollinated largely by birds or specialist mammals, they tend to be larger and have larger nectar rewards than species that are strictly insect-pollinated. They also tend to spread their rewards over longer periods, having long flowering seasons; their specialist pollinators would be likely to starve if the pollination season were too short.[26]
As for the types of pollinators, reptile pollinators are known, but they form a minority in most ecological situations. They are most frequent and most ecologically significant in island systems, where insect and sometimes also bird populations may be unstable and less species-rich. Adaptation to a lack of animal food and of predation pressure might therefore favour reptiles becoming more herbivorous and more inclined to feed on pollen and nectar.[28] Most lizard species in the families that appear significant in pollination carry pollen only incidentally, especially the larger species such as Varanidae and Iguanidae, but several species of the Gekkonidae are active pollinators, and so is at least one species of the Lacertidae, Podarcis lilfordi, which pollinates various species, but in particular is the major pollinator of Euphorbia dendroides on various Mediterranean islands.[29]
Mammals are not generally thought of as pollinators, but some rodents, bats and marsupials are significant pollinators and some even specialise in such activities. In South Africa certain species of Protea (in particular Protea humiflora, P. amplexicaulis, P. subulifolia, P. decurrens and P. cordata) are adapted to pollination by rodents (particularly Cape Spiny Mouse, Acomys subspinosus)[30] and elephant shrews (Elephantulus species).[31] The flowers are borne near the ground, are yeasty smelling, not colourful, and sunbirds reject the nectar with its high xylose content. The mice apparently can digest the xylose and they eat large quantities of the pollen.[32] In Australia pollination by flying, gliding and earthbound mammals has been demonstrated.[33] Examples of pollen vectors include many species of wasps, that transport pollen of many plant species, being potential or even efficient pollinators.[34]
Pollination can be accomplished by cross-pollination or by self-pollination:
Geranium incanum, like most geraniums and pelargoniums, sheds its anthers, sometimes its stamens as well, as a barrier to self-pollination. This young flower is about to open its anthers, but has not yet fully developed its pistil.
These Geranium incanum flowers have opened their anthers, but not yet their stigmas. Note the change of colour that signals to pollinators that it is ready for visits.
This Geranium incanum flower has shed its stamens, and deployed the tips of its pistil without accepting pollen from its own anthers. (It might of course still receive pollen from younger flowers on the same plant.)
An estimated 48.7% of plant species are either dioecious or self-incompatible obligate out-crossers.[40] It is also estimated that about 42% of flowering plants have a mixed mating system in nature.[41] In the most common kind of mixed mating system, individual plants produce a single type of flower and fruits may contain self-pollinated, out-crossed or a mixture of progeny types.
Pollination also requires consideration of pollenizers, the plants that serve as the pollen source for other plants. Some plants are self-compatible (self-fertile) and can pollinate and fertilize themselves. Other plants have chemical or physical barriers to self-pollination.
In agriculture and horticulture pollination management, a good pollenizer is a plant that provides compatible, viable and plentiful pollen and blooms at the same time as the plant that is to be pollinated, or whose pollen can be stored and used when needed to pollinate the desired flowers. Hybridization is effective pollination between flowers of different species, or between different breeding lines or populations; see also Heterosis.
Peaches are considered self-fertile because a commercial crop can be produced without cross-pollination, though cross-pollination usually gives a better crop. Apples are considered self-incompatible, because a commercial crop must be cross-pollinated. Many commercial fruit tree varieties are grafted clones, genetically identical. An orchard block of apples of one variety is genetically a single plant. Many growers now consider this a mistake. One means of correcting this mistake is to graft a limb of an appropriate pollenizer (generally a variety of crabapple) every six trees or so.[citation needed]
The first fossil record for abiotic pollination is from fern-like plants in the late Carboniferous period. Gymnosperms show evidence for biotic pollination as early as the Triassic period. Many fossilized pollen grains show characteristics similar to the biotically dispersed pollen today. Furthermore, the gut contents, wing structures, and mouthpart morphology of fossilized beetles and flies suggest that they acted as early pollinators. The association between beetles and angiosperms during the early Cretaceous period led to parallel radiations of angiosperms and insects into the late Cretaceous. The evolution of nectaries in late Cretaceous flowers signals the beginning of the mutualism between hymenopterans and angiosperms.
Bees provide a good example of the mutualism that exists between hymenopterans and angiosperms. Flowers provide bees with nectar (an energy source) and pollen (a source of protein). When bees go from flower to flower collecting pollen they also deposit pollen grains onto the flowers, thus pollinating them. While pollen and nectar, in most cases, are the most notable rewards attained from flowers, bees also visit flowers for other resources such as oil, fragrance, resin and even waxes.[42] It has been estimated that bees originated with the origin or diversification of angiosperms.[43] In addition, cases of coevolution between bee species and flowering plants have been illustrated by specialized adaptations. For example, Rediviva neliana, a bee that collects oil from Diascia capsularis, is selected for longer legs by the flowers' long spurs, and the longer legs in turn select for even longer spurs that deposit pollen on the oil-collecting bee; each species thus continually drives the other's evolution.[44]
The most essential staple food crops on the planet, such as wheat, maize, rice, soybeans and sorghum,[46][47] are wind-pollinated or self-pollinating. When considering the top 15 crops contributing to the human diet globally in 2013, slightly over 10% of the total human diet of plant crops (211 out of 1916 kcal/person/day) is dependent upon insect pollination.[46]
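The quoted share follows directly from the two kcal figures cited above:

```python
# Quick check of the figure quoted above: of roughly 1916 kcal/person/day
# supplied by the top plant crops in 2013, about 211 kcal/person/day come
# from crops dependent on insect pollination.
insect_pollinated_kcal = 211
total_kcal = 1916

share = insect_pollinated_kcal / total_kcal
print(f"{share:.1%}")  # → 11.0%, i.e. "slightly over 10%"
```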
Pollination management is a branch of agriculture that seeks to protect and enhance present pollinators and often involves the culture and addition of pollinators in monoculture situations, such as commercial fruit orchards. The largest managed pollination event in the world is in Californian almond orchards, where nearly half (about one million hives) of the US honey bees are trucked to the almond orchards each spring. New York's apple crop requires about 30,000 hives; Maine's blueberry crop uses about 50,000 hives each year. The US solution to the pollinator shortage, so far, has been for commercial beekeepers to become pollination contractors and to migrate. Just as the combine harvesters follow the wheat harvest from Texas to Manitoba, beekeepers follow the bloom from south to north, to provide pollination for many different crops.[citation needed]
In America, bees are brought to commercial plantings of cucumbers, squash, melons, strawberries, and many other crops. Honey bees are not the only managed pollinators: a few other species of bees are also raised as pollinators. The alfalfa leafcutter bee is an important pollinator for alfalfa seed in the western United States and Canada. Bumblebees are increasingly raised and used extensively for greenhouse tomatoes and other crops.
The ecological and financial importance of natural pollination by insects to agricultural crops, improving their quality and quantity, is increasingly appreciated and has given rise to new financial opportunities. The presence of a forest or wild grassland with native pollinators near agricultural crops, such as apples, almonds or coffee, can improve their yield by about 20%. The benefits of native pollinators may result in forest owners demanding payment for their contribution to the improved crop results – a simple example of the economic value of ecological services. Farmers can also raise native crops in order to promote native bee pollinator species, as shown with L. vierecki in Delaware[48] and L. leucozonium in southwest Virginia.[49]
The American Institute of Biological Sciences reports that native insect pollination saves the United States agricultural economy nearly an estimated $3.1 billion annually through natural crop production;[50] pollination produces some $40 billion worth of products annually in the United States alone.[51]
Pollination of food crops has become an environmental issue, due to two trends. The trend to monoculture means that greater concentrations of pollinators are needed at bloom time than ever before, yet the area is forage poor or even deadly to bees for the rest of the season. The other trend is the decline of pollinator populations, due to pesticide misuse and overuse, new diseases and parasites of bees, clearcut logging, decline of beekeeping, suburban development, removal of hedges and other habitat from farms, and public concern about bees. Widespread aerial spraying for mosquitoes due to West Nile fears is causing an acceleration of the loss of pollinators.
In some situations, farmers or horticulturists may aim to restrict natural pollination to permit breeding only with the preferred individual plants. This may be achieved through the use of pollination bags.
In some instances growers' demand for beehives far exceeds the available supply. The number of managed beehives in the US has steadily declined from close to 6 million after WWII to less than 2.5 million today. In contrast, the area dedicated to growing bee-pollinated crops has grown over 300% in the same time period. Additionally, in the past five years there has been a decline in winter managed beehives, with colony losses reaching an unprecedented rate of nearly 30%.[52][53][54][55] At present, there is an enormous demand for beehive rentals that cannot always be met. There is a clear need across the agricultural industry for a management tool to draw pollinators into cultivations and encourage them to preferentially visit and pollinate the flowering crop. By attracting pollinators like honey bees and increasing their foraging behavior, particularly in the center of large plots, growers can increase returns and optimize the yield from their plantings. ISCA Technologies,[56] of Riverside, California, created a semiochemical formulation called SPLAT Bloom that modifies the behavior of honey bees, inciting them to visit flowers in every portion of the field.
Loss of pollinators, also known as pollinator decline (of which colony collapse disorder is perhaps the best known example), has been noticed in recent years. These losses of pollinators have caused a disturbance in early plant regeneration processes such as seed dispersal and pollination. Early processes of plant regeneration greatly depend on plant-animal interactions, and because these interactions are interrupted, biodiversity and ecosystem functioning are threatened.[57] Pollination by animals aids the genetic variability and diversity within plants because it allows for out-crossing instead of self-crossing. Without this genetic diversity there would be a lack of traits for natural selection to act on for the survival of the plant species. Seed dispersal is also important for plant fitness because it allows plants to expand their populations. More than that, it permits plants to escape environments that have changed and become difficult to reside in. All of these factors show the importance of pollinators for plants, which are a significant part of the foundation for a stable ecosystem. Loss of pollinators is especially devastating because so many plant species rely on them: more than 87.5% of angiosperms, over 75% of tropical tree species, and 30-40% of tree species in temperate regions depend on pollination and seed dispersal.[57]
Factors that contribute to pollinator decline include habitat destruction, pesticides, parasitism/diseases, and climate change.[58] The more destructive forms of human disturbance are land use changes such as fragmentation, selective logging, and the conversion to secondary forest habitat.[57] Defaunation of frugivores is also an important driver.[59] These alterations are especially harmful due to the sensitivity of the pollination process of plants.[57] Research on tropical palms found that defaunation has caused a decline in seed dispersal, which causes a decrease in genetic variability in this species.[59] Habitat destruction such as fragmentation and selective logging removes areas that are most optimal for the different types of pollinators, removing pollinators' food resources and nesting sites and leading to isolation of populations.[60] The effect of pesticides on pollinators has been debated because it is difficult to determine that a single pesticide is the cause as opposed to a mixture or other threats.[60] Whether exposure alone causes damage, or whether duration and potency are also factors, is unknown.[60] However, insecticides have negative effects, as in the case of neonicotinoids that harm bee colonies. Many researchers believe it is the synergistic effects of these factors which are ultimately detrimental to pollinator populations.[58]
Bees, the best known and best understood pollinators, have become the prime example of the decline in pollinators. Bees are essential in the pollination of agricultural crops and wild plants and are one of the main insects that perform this task.[61] Of the bee species, the honey bee, Apis mellifera, has been studied the most, and in the United States there has been a loss of 59% of colonies from 1947 to 2005.[61] The decrease in honey bee populations has been attributed to pesticides, genetically modified crops, fragmentation, and introduced parasites and diseases.[62] There has been a focus on the effects of neonicotinoids on honey bee populations. Neonicotinoid insecticides have been used due to their low mammalian toxicity, target specificity, low application rates, and broad-spectrum activity. However, these insecticides make their way throughout the plant, including the pollen and nectar, and have been shown to affect the nervous system and colony relations of honey bee populations.[62]
Butterflies too have suffered due to these modifications. Butterflies are helpful ecological indicators since they are sensitive to changes within the environment such as season, altitude, and, above all, human impact. Butterfly populations were higher within natural forest and lower in open land, because in open land butterflies are exposed to desiccation and predation. These open regions are caused by habitat destruction such as logging for timber, livestock grazing, and firewood collection. Due to this destruction, butterfly species diversity can decrease, and it is known that there is a correlation between butterfly diversity and plant diversity.[63]
Besides the imbalance of the ecosystem caused by the decline in pollinators, it may jeopardise food security. Pollination is necessary for plants to continue their populations, and 3/4 of the plant species that contribute to the world's food supply require pollinators.[64] Insect pollinators, like bees, are large contributors to crop production; over $200 billion worth of crop species are pollinated by these insects.[60] Pollinators are also essential because they improve crop quality and increase genetic diversity, which is necessary in producing fruit with nutritional value and various flavors.[65] Crops that do not depend on animals for pollination but on the wind or self-pollination, like corn and potatoes, have doubled in production and make up a large part of the human diet, but do not provide the micronutrients that are needed.[66] The essential nutrients that are necessary in the human diet are present in plants that rely on animal pollinators.[66] There have already been issues with vitamin and mineral deficiencies, and it is believed that if pollinator populations continue to decrease, these deficiencies will become even more prominent.[65]
Wild pollinators often visit a large number of plant species, and plants are visited by a large number of pollinator species. All these relations together form a network of interactions between plants and pollinators. Surprising similarities were found in the structure of networks consisting of the interactions between plants and pollinators. This structure was found to be similar in very different ecosystems on different continents, consisting of entirely different species.[67]
The structure of plant-pollinator networks may have large consequences for the way in which pollinator communities respond to increasingly harsh conditions. Mathematical models examining the consequences of this network structure for the stability of pollinator communities suggest that the specific way in which plant-pollinator networks are organized minimizes competition between pollinators[68] and may even lead to strong indirect facilitation between pollinators when conditions are harsh.[69] This means that pollinator species together can survive under harsh conditions. But it also means that pollinator species collapse simultaneously when conditions pass a critical point, because pollinator species depend on each other when surviving under difficult conditions.[69]
Such a community-wide collapse, involving many pollinator species, can occur suddenly when increasingly harsh conditions pass a critical point, and recovery from such a collapse might not be easy. The improvement in conditions needed for pollinators to recover could be substantially larger than the improvement needed to return to the conditions at which the pollinator community collapsed.[69]
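This kind of tipping-point behaviour can be sketched with a toy model; the model below and all of its parameter values are illustrative assumptions, not taken from the cited studies. A single aggregate pollinator abundance x gains a saturating mutualistic benefit b·x/(h+x) from plant visits, suffers harshness-dependent mortality m, and competes intraspecifically with strength c. Sweeping the harshness m up and then back down shows a sudden collapse and a collapsed state that persists even when conditions improve again:

```python
# Toy model (illustrative only): dx/dt = x * (b*x/(h+x) - m - c*x).
# b, h, c are arbitrary parameter choices; m plays the role of
# "increasingly harsh conditions" in the text.

def settle(x, m, b=1.0, h=0.3, c=0.4, dt=0.01, steps=20000):
    """Integrate the toy ODE by forward Euler until (near) equilibrium."""
    for _ in range(steps):
        x += dt * x * (b * x / (h + x) - m - c * x)
    return x

ms = [i * 0.05 for i in range(10)]   # harshness m = 0.00 .. 0.45

x, up = 1.0, []
for m in ms:                         # conditions become harsher
    x = settle(x, m)
    up.append(x)

down = []
for m in reversed(ms):               # conditions improve again
    x = settle(x, m)
    down.append(x)

print(f"up-sweep,   m=0.40: x = {up[8]:.3f}")    # healthy community
print(f"up-sweep,   m=0.45: x = {up[9]:.2e}")    # sudden collapse
print(f"down-sweep, m=0.40: x = {down[1]:.2e}")  # collapse persists
```

In this sketch the healthy state and extinction are both stable over a range of m (bistability), so merely returning to the pre-collapse harshness does not restore the community — the hysteresis the paragraph describes.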
While there are 200,000 to 350,000 different species of animals that help pollination, bees are responsible for the majority of the pollination of consumed crops, providing between US$235 and $577 billion of benefits to global food production.[70] Since the early 1900s, beekeepers in the United States have rented out their colonies to farmers to increase the farmers' crop yields, earning additional revenue from providing privatized pollination. As of 2016, 41% of an average US beekeeper's revenue comes from providing such pollination services to farmers, making it the biggest proportion of their income, with the rest coming from sales of honey, beeswax, government subsidies, etc.[71] This is an example of how a positive externality, pollination of crops from beekeeping and honey-making, was successfully accounted for and incorporated into the overall market for agriculture. On top of assisting food production, pollination services provide beneficial spillovers, as bees pollinate not only the crops but also other plants around the area in which they are set loose, increasing biodiversity for the local ecosystem.[72] There is even further spillover, as biodiversity increases ecosystem resistance for wildlife and crops.[73] Due to their role in pollination in crop production, commercial honeybees are considered livestock by the US Department of Agriculture. The impact of pollination varies by crop. For example, almond production in the United States, an $11 billion industry based almost exclusively in the state of California, is heavily dependent on bees for pollination of almond trees. The almond industry uses up to 82% of the services in the pollination market. Each February, around 60% of all bee colonies in the US are moved to California's Central Valley.[74]
Over the past decade, beekeepers across the US have reported that the mortality rate of their bee colonies has stayed constant at about 30% every year, making the deaths an expected cost of business for the beekeepers. While the exact cause of this phenomenon is unknown, according to the US Department of Agriculture Colony Collapse Disorder Progress Report it can be traced to factors such as pollution, pesticides, and pathogens, from evidence found in the areas affected and in the colonies themselves.[75] Pollution and pesticides are detrimental to the health of the bees and their colonies, as the bees' ability to pollinate and return to their colonies is greatly compromised.[76] Moreover, California's Central Valley has been identified by the World Health Organization as the location of the country's worst air pollution.[77] Almond-pollinating bees, approximately 60% of the bees in the US as mentioned above, are mixed with bees from thousands of other hives provided by different beekeepers, making them highly susceptible to diseases and mites that any of them could be carrying.[74] The deaths do not stop at commercial honeybees, as there is evidence of significant pathogen spillover to other pollinators, including wild bumble bees, infecting up to 35-100% of wild bees within a 2 km radius of commercial pollination.[78] Honeybees infected by an RNA virus leave traces of the virus on pollen, exposing other rental honeybees and wild pollinators, including those that are not bees. The infected bees return to their colonies and pass the virus to the queen bee, who lays virus-infected eggs, compromising the health of the colony.[79] The negative externality of private pollination services is the decline of biodiversity through the deaths of commercial and wild bees.
Despite losing about a third of their workforce every year, beekeepers continue to rent out their bees to almond farms due to the high pay from the almond industry. In 2016, a colony rented out for almond pollination gave beekeepers an income of $165 per colony rented, around three times the average for other crops that use the pollination rental service.[80] However, a recent study published in the Journal of Economic Entomology found that once the costs of maintaining bees specifically for almond pollination are considered, including overwintering, summer management, and the replacement of dying bees, almond pollination is barely or not at all profitable for the average beekeeper.[81]
en/4705.html.txt
Pollution is the introduction of contaminants into the natural environment that cause adverse change.[1] Pollution can take the form of chemical substances or energy, such as noise, heat or light. Pollutants, the components of pollution, can be either foreign substances/energies or naturally occurring contaminants. Pollution is often classed as point source or nonpoint source pollution. In 2015, pollution killed 9 million people in the world.[2][3]
Major forms of pollution include air pollution, light pollution, littering, noise pollution, plastic pollution, soil contamination, radioactive contamination, thermal pollution, visual pollution and water pollution.
Air pollution has always accompanied civilizations. Pollution started in prehistoric times, when humans created the first fires. According to a 1983 article in the journal Science, "soot found on ceilings of prehistoric caves provides ample evidence of the high levels of pollution that was associated with inadequate ventilation of open fires."[4] Metal forging appears to be a key turning point in the creation of significant air pollution levels outside the home. Core samples of glaciers in Greenland indicate increases in pollution associated with Greek, Roman, and Chinese metal production.[5]
The burning of coal and wood, and the presence of many horses in concentrated areas made the cities the primary sources of pollution. The Industrial Revolution brought an infusion of untreated chemicals and wastes into local streams that served as the water supply. King Edward I of England banned the burning of sea-coal by proclamation in London in 1272, after its smoke became a problem;[6][7] the fuel was so common in England that this earliest of names for it was acquired because it could be carted away from some shores by the wheelbarrow.
It was the Industrial Revolution that gave birth to environmental pollution as we know it today. London also recorded one of the earlier extreme cases of water quality problems with the Great Stink on the Thames of 1858, which led to construction of the London sewerage system soon afterward. Pollution issues escalated as population growth far exceeded viability of neighborhoods to handle their waste problem. Reformers began to demand sewer systems and clean water.[8]
In 1870, the sanitary conditions in Berlin were among the worst in Europe. August Bebel recalled conditions before a modern sewer system was built in the late 1870s:
Waste-water from the houses collected in the gutters running alongside the curbs and emitted a truly fearsome smell. There were no public toilets in the streets or squares. Visitors, especially women, often became desperate when nature called. In the public buildings the sanitary facilities were unbelievably primitive.... As a metropolis, Berlin did not emerge from a state of barbarism into civilization until after 1870.[9]
The primitive conditions were intolerable for a world national capital, and the Imperial German government brought in its scientists, engineers, and urban planners to not only solve the deficiencies, but to forge Berlin as the world's model city. A British expert in 1906 concluded that Berlin represented "the most complete application of science, order and method of public life," adding "it is a marvel of civic administration, the most modern and most perfectly organized city that there is."[10]
The emergence of great factories and consumption of immense quantities of coal gave rise to unprecedented air pollution and the large volume of industrial chemical discharges added to the growing load of untreated human waste. Chicago and Cincinnati were the first two American cities to enact laws ensuring cleaner air in 1881. Pollution became a major issue in the United States in the early twentieth century, as progressive reformers took issue with air pollution caused by coal burning, water pollution caused by bad sanitation, and street pollution caused by the 3 million horses who worked in American cities in 1900, generating large quantities of urine and manure. As historian Martin Melosi notes, the generation that first saw automobiles replacing the horses saw cars as "miracles of cleanliness".[11] By the 1940s, however, automobile-caused smog was a major issue in Los Angeles.[12]
Other cities followed around the country until early in the 20th century, when the short-lived Office of Air Pollution was created under the Department of the Interior. Extreme smog events were experienced by the cities of Los Angeles and Donora, Pennsylvania, in the late 1940s, serving as another public reminder.[13]
Air pollution would continue to be a problem in England, especially later during the industrial revolution, and extending into the recent past with the Great Smog of 1952. Awareness of atmospheric pollution spread widely after World War II, with fears triggered by reports of radioactive fallout from atomic warfare and testing.[14] Then a non-nuclear event – the Great Smog of 1952 in London – killed at least 4000 people.[15] This prompted some of the first major modern environmental legislation: the Clean Air Act of 1956.
Pollution began to draw major public attention in the United States between the mid-1950s and early 1970s, when Congress passed the Noise Control Act, the Clean Air Act, the Clean Water Act, and the National Environmental Policy Act.[16]
Severe incidents of pollution helped increase consciousness. PCB dumping in the Hudson River resulted in a ban by the EPA on consumption of its fish in 1974. National news stories in the late 1970s – especially the long-term dioxin contamination at Love Canal starting in 1947 and uncontrolled dumping in Valley of the Drums – led to the Superfund legislation of 1980.[17] The pollution of industrial land gave rise to the name brownfield, a term now common in city planning.
The development of nuclear science introduced radioactive contamination, which can remain lethally radioactive for hundreds of thousands of years. Lake Karachay – named by the Worldwatch Institute as the "most polluted spot" on earth – served as a disposal site for the Soviet Union throughout the 1950s and 1960s. Chelyabinsk, Russia, is considered the "Most polluted place on the planet".[18]
Nuclear weapons continued to be tested in the Cold War, especially in the earlier stages of their development. The toll on the worst-affected populations and the growth since then in understanding about the critical threat to human health posed by radioactivity has also been a prohibitive complication associated with nuclear power. Though extreme care is practiced in that industry, the potential for disaster suggested by incidents such as those at Three Mile Island, Chernobyl, and Fukushima pose a lingering specter of public mistrust. Worldwide publicity has been intense on those disasters.[19] Widespread support for test ban treaties has ended almost all nuclear testing in the atmosphere.[20]
International catastrophes such as the wreck of the Amoco Cadiz oil tanker off the coast of Brittany in 1978 and the Bhopal disaster in 1984 have demonstrated the universality of such events and the scale on which efforts to address them needed to engage. The borderless nature of atmosphere and oceans inevitably resulted in the implication of pollution on a planetary level with the issue of global warming. Most recently the term persistent organic pollutant (POP) has come to describe a group of chemicals such as PBDEs and PFCs among others. Though their effects remain somewhat less well understood owing to a lack of experimental data, they have been detected in various ecological habitats far removed from industrial activity such as the Arctic, demonstrating diffusion and bioaccumulation after only a relatively brief period of widespread use.
A much more recently discovered problem is the Great Pacific Garbage Patch, a huge concentration of plastics, chemical sludge and other debris which has been collected into a large area of the Pacific Ocean by the North Pacific Gyre. This is a less well known pollution problem than the others described above, but nonetheless has multiple and serious consequences such as increasing wildlife mortality, the spread of invasive species and human ingestion of toxic chemicals. Organizations such as 5 Gyres have researched the pollution and, along with artists like Marina DeBris, are working toward publicizing the issue.
Pollution introduced by light at night is becoming a global problem, more severe in urban centres but nonetheless also contaminating large territories far away from towns.[21]
Growing evidence of local and global pollution and an increasingly informed public over time have given rise to environmentalism and the environmental movement, which generally seek to limit human impact on the environment.
The major forms of pollution are listed below along with the particular contaminant relevant to each of them:
A pollutant is a waste material that pollutes air, water, or soil. The severity of a pollutant is determined by its chemical nature, its concentration, the area affected and its persistence.
Pollution has a cost.[23][24][25] Manufacturing activities that cause air pollution impose health and clean-up costs on the whole of society, whereas the neighbors of an individual who chooses to fire-proof his home may benefit from a reduced risk of a fire spreading to their own homes. A manufacturing activity that causes air pollution is an example of a negative externality in production. A negative externality in production occurs “when a firm’s production reduces the well-being of others who are not compensated by the firm."[26] For example, if a laundry firm exists near a polluting steel manufacturing firm, there will be increased costs for the laundry firm because of the dirt and smoke produced by the steel manufacturing firm.[27] If external costs exist, such as those created by pollution, the manufacturer will choose to produce more of the product than would be produced if the manufacturer were required to pay all associated environmental costs. Because responsibility or consequence for self-directed action lies partly outside the self, an element of externalization is involved. If there are external benefits, such as in public safety, less of the good may be produced than would be the case if the producer were to receive payment for the external benefits to others. However, goods and services that involve negative externalities in production, such as those that produce pollution, tend to be over-produced and underpriced since the externality is not being priced into the market.[26]
Pollution can also create costs for the firms producing the pollution. Sometimes firms choose, or are forced by regulation, to reduce the amount of pollution that they are producing. The associated costs of doing this are called abatement costs, or marginal abatement costs if measured by each additional unit.[28] In 2005 pollution abatement capital expenditures and operating costs in the US amounted to nearly $27 billion.[29]
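As a sketch of the relationship between marginal and total abatement cost (the rising cost function and all numbers here are assumptions for illustration, not figures from the cited sources):

```python
# Illustrative sketch: marginal abatement cost typically rises as cleanup
# deepens; the total cost of abating N units is the sum of the marginal
# cost of each successive unit removed. The cost function is invented.

def total_abatement_cost(marginal_cost, units_abated):
    """Sum the marginal cost of each successive unit of pollution removed."""
    return sum(marginal_cost(u) for u in range(1, units_abated + 1))

# Assume the u-th unit abated costs 2*u (cleanup gets harder as it deepens):
mac = lambda u: 2 * u
print(total_abatement_cost(mac, 5))  # 2 + 4 + 6 + 8 + 10 = 30
```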
Society derives some indirect utility from pollution, otherwise there would be no incentive to pollute. This utility comes from the consumption of goods and services that create pollution. Therefore, it is important that policymakers attempt to balance these indirect benefits with the costs of pollution in order to achieve an efficient outcome.[30]
It is possible to use environmental economics to determine which level of pollution is deemed the social optimum. For economists, pollution is an “external cost and occurs only when one or more individuals suffer a loss of welfare,” however, there exists a socially optimal level of pollution at which welfare is maximized.[31] This is because consumers derive utility from the good or service manufactured, which will outweigh the social cost of pollution until a certain point. At this point the damage of one extra unit of pollution to society, the marginal cost of pollution, is exactly equal to the marginal benefit of consuming one more unit of the good or service.[32]
In markets with pollution, or other negative externalities in production, the free market equilibrium will not account for the costs of pollution on society. If the social costs of pollution are higher than the private costs incurred by the firm, then the true supply curve will be higher. The point at which the social marginal cost and market demand intersect gives the socially optimal level of pollution. At this point, the quantity will be lower and the price will be higher in comparison to the free market equilibrium.[32] Therefore, the free market outcome could be considered a market failure because it “does not maximize efficiency”.[26]
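The model described above can be sketched numerically. In this minimal example (linear curves and all numbers are assumed for illustration), pricing in a constant marginal external cost lowers the equilibrium quantity and raises the price, exactly as the text describes:

```python
# Minimal sketch of a market with a production externality. Inverse demand
# is P = a - b*Q; the supply curve is marginal cost P = c + e + d*Q, where
# e is the marginal external cost (0 in the free market). All values assumed.

def equilibrium(demand_intercept, demand_slope, mc_intercept, mc_slope,
                external_mc=0.0):
    """Solve demand = marginal cost for the equilibrium quantity and price."""
    q = (demand_intercept - mc_intercept - external_mc) / (demand_slope + mc_slope)
    p = demand_intercept - demand_slope * q
    return q, p

# Free-market equilibrium ignores the damage from pollution:
q_free, p_free = equilibrium(100, 1, 20, 1)                # Q = 40, P = 60
# Social optimum adds a constant marginal damage of 10 per unit:
q_soc, p_soc = equilibrium(100, 1, 20, 1, external_mc=10)  # Q = 35, P = 65
```

As expected, the socially optimal quantity (35) is lower and the price (65) higher than the free-market outcome (40 at 60).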
This model can be used as a basis to evaluate different methods of internalizing the externality. Some examples include tariffs, a carbon tax and cap and trade systems.
Air pollution comes from both natural and human-made (anthropogenic) sources. However, globally human-made pollutants from combustion, construction, mining, agriculture and warfare are increasingly significant in the air pollution equation.[33]
Motor vehicle emissions are one of the leading causes of air pollution.[34][35][36] China, the United States, Russia, India,[37] Mexico, and Japan are the world leaders in air pollution emissions. Principal stationary pollution sources include chemical plants, coal-fired power plants, oil refineries,[38] petrochemical plants, nuclear waste disposal activity, incinerators, large livestock farms (dairy cows, pigs, poultry, etc.), PVC factories, metals production factories, plastics factories, and other heavy industry. Agricultural air pollution comes from contemporary practices, which include clear felling and burning of natural vegetation as well as spraying of pesticides and herbicides.[39]
About 400 million metric tons of hazardous wastes are generated each year.[40] The United States alone produces about 250 million metric tons.[41] Americans constitute less than 5% of the world's population, but produce roughly 25% of the world's CO2,[42] and generate approximately 30% of world's waste.[43][44] In 2007, China overtook the United States as the world's biggest producer of CO2,[45] while still far behind based on per capita pollution (ranked 78th among the world's nations).[46]
In February 2007, a report by the Intergovernmental Panel on Climate Change (IPCC), representing the work of 2,500 scientists, economists, and policymakers from more than 120 countries, confirmed that humans have been the primary cause of global warming since 1950. The report concluded that humans have ways to cut greenhouse gas emissions and avoid the consequences of global warming, but that the transition away from fossil fuels like coal and oil needs to occur within decades.[47]
Some of the more common soil contaminants are chlorinated hydrocarbons (CFH), heavy metals (such as chromium, cadmium – found in rechargeable batteries, and lead – found in lead paint, aviation fuel and still in some countries, gasoline), MTBE, zinc, arsenic and benzene. In 2001 a series of press reports culminating in a book called Fateful Harvest unveiled a widespread practice of recycling industrial byproducts into fertilizer, resulting in the contamination of the soil with various metals. Ordinary municipal landfills are the source of many chemical substances entering the soil environment (and often groundwater), emanating from the wide variety of refuse accepted, especially substances illegally discarded there, or from pre-1970 landfills that may have been subject to little control in the U.S. or EU. There have also been some unusual releases of polychlorinated dibenzodioxins, commonly called dioxins for simplicity, such as TCDD.[48]
Pollution can also be the consequence of a natural disaster. For example, hurricanes often involve water contamination from sewage, and petrochemical spills from ruptured boats or automobiles. Larger scale and environmental damage is not uncommon when coastal oil rigs or refineries are involved. Some sources of pollution, such as nuclear power plants or oil tankers, can produce widespread and potentially hazardous releases when accidents occur.
In the case of noise pollution the dominant source class is the motor vehicle, producing about ninety percent of all unwanted noise worldwide.
Adverse air quality can kill many organisms, including humans. Ozone pollution can cause respiratory disease, cardiovascular disease, throat inflammation, chest pain, and congestion. Water pollution causes approximately 14,000 deaths per day, mostly due to contamination of drinking water by untreated sewage in developing countries. An estimated 500 million Indians have no access to a proper toilet.[52][53] Over ten million people in India fell ill with waterborne illnesses in 2013, and 1,535 people died, most of them children.[54] Nearly 500 million Chinese lack access to safe drinking water.[55] A 2010 analysis estimated that 1.2 million people died prematurely each year in China because of air pollution.[56] The high smog levels that China has faced for a long time can damage residents' bodies and cause a variety of diseases.[57] The WHO estimated in 2007 that air pollution causes half a million deaths per year in India.[58] Studies have estimated that the number of people killed annually in the United States could be over 50,000.[59]
Oil spills can cause skin irritations and rashes. Noise pollution induces hearing loss, high blood pressure, stress, and sleep disturbance. Mercury has been linked to developmental deficits in children and neurologic symptoms. Older people are disproportionately affected by diseases induced by air pollution. Those with heart or lung disorders are at additional risk. Children and infants are also at serious risk. Lead and other heavy metals have been shown to cause neurological problems. Chemical and radioactive substances can cause cancer as well as birth defects.
An October 2017 study by the Lancet Commission on Pollution and Health found that global pollution, specifically toxic air, water, soils and workplaces, kills nine million people annually, which is triple the number of deaths caused by AIDS, tuberculosis and malaria combined, and 15 times higher than deaths caused by wars and other forms of human violence.[60] The study concluded that "pollution is one of the great existential challenges of the Anthropocene era. Pollution endangers the stability of the Earth’s support systems and threatens the continuing survival of human societies."[3]
Pollution has been found to be present widely in the environment. There are a number of effects of this:
The Toxicology and Environmental Health Information Program (TEHIP)[61] at the United States National Library of Medicine (NLM) maintains a comprehensive toxicology and environmental health web site that includes access to resources produced by TEHIP and by other government agencies and organizations. This web site includes links to databases, bibliographies, tutorials, and other scientific and consumer-oriented resources. TEHIP also is responsible for the Toxicology Data Network (TOXNET)[62] an integrated system of toxicology and environmental health databases that are available free of charge on the web.
TOXMAP is a Geographic Information System (GIS) that is part of TOXNET. TOXMAP uses maps of the United States to help users visually explore data from the United States Environmental Protection Agency's (EPA) Toxics Release Inventory and Superfund Basic Research Programs.
A 2019 paper linked pollution to adverse school outcomes for children.[63]
A number of studies show that pollution has an adverse effect on the productivity of both indoor and outdoor workers.[64][65][66][67]
To protect the environment from the adverse effects of pollution, many nations worldwide have enacted legislation to regulate various types of pollution as well as to mitigate the adverse effects of pollution.
Pollution control is a term used in environmental management. It means the control of emissions and effluents into air, water or soil. Without pollution control, the waste products from overconsumption, heating, agriculture, mining, manufacturing, transportation and other human activities, whether they accumulate or disperse, will degrade the environment. In the hierarchy of controls, pollution prevention and waste minimization are more desirable than pollution control. In the field of land development, low impact development is a similar technique for the prevention of urban runoff.
The earliest precursor of pollution generated by life forms would have been a natural function of their existence. The attendant consequences on viability and population levels fell within the sphere of natural selection. These would have included the demise of a population locally or ultimately, species extinction. Processes that were untenable would have resulted in a new balance brought about by changes and adaptations. At the extremes, for any form of life, consideration of pollution is superseded by that of survival.
For humankind, the factor of technology is a distinguishing and critical consideration, both as an enabler and an additional source of byproducts. Short of survival, human concerns include the range from quality of life to health hazards. Since science holds experimental demonstration to be definitive, modern treatment of toxicity or environmental harm involves defining a level at which an effect is observable. Common examples of fields where practical measurement is crucial include automobile emissions control, industrial exposure (e.g. Occupational Safety and Health Administration (OSHA) PELs), toxicology (e.g. LD50), and medicine (e.g. medication and radiation doses).
"The solution to pollution is dilution", is a dictum which summarizes a traditional approach to pollution management whereby sufficiently diluted pollution is not harmful.[69][70] It is well-suited to some other modern, locally scoped applications such as laboratory safety procedure and hazardous material release emergency management. But it assumes that the diluent is in virtually unlimited supply for the application or that resulting dilutions are acceptable in all cases.
Such simple treatment for environmental pollution on a wider scale might have had greater merit in earlier centuries when physical survival was often the highest imperative, human population and densities were lower, technologies were simpler and their byproducts more benign. But these are often no longer the case. Furthermore, advances have enabled measurement of concentrations not possible before. The use of statistical methods in evaluating outcomes has given currency to the principle of probable harm in cases where assessment is warranted but resorting to deterministic models is impractical or infeasible. In addition, consideration of the environment beyond direct impact on human beings has gained prominence.
Yet in the absence of a superseding principle, this older approach predominates in practices throughout the world. It is the basis by which concentrations of effluent are gauged for legal release, exceeding which penalties are assessed or restrictions applied. One such superseding principle is contained in modern hazardous waste laws in developed countries, as the process of diluting hazardous waste to make it non-hazardous is usually a regulated treatment process.[71] In many cases, migrating from dilution to elimination of pollution is confronted by challenging economic and technological barriers.
Carbon dioxide, while vital for photosynthesis, is sometimes referred to as pollution, because raised levels of the gas in the atmosphere are affecting the Earth's climate. Disruption of the environment can also highlight the connection between areas of pollution that would normally be classified separately, such as those of water and air. Recent studies have investigated the potential for long-term rising levels of atmospheric carbon dioxide to cause slight but critical increases in the acidity of ocean waters, and the possible effects of this on marine ecosystems.
Air pollution fluctuations are known to depend strongly on weather dynamics. A recent study developed a multi-layered network analysis and detected strong interlinks between the geopotential height of the upper air (about 5 km) and surface air pollution in both China and the USA.[74] This study indicates that Rossby waves significantly affect air pollution fluctuations through the development of cyclone and anticyclone systems, which in turn affect the local stability of the air and the winds. The impact of Rossby waves on air pollution has been observed in the daily fluctuations in surface air pollution. Thus, the impact of Rossby waves on human life is significant, and rapid warming of the Arctic could slow down Rossby waves, thereby increasing human health risks.
Pure Earth, an international not-for-profit organization dedicated to eliminating life-threatening pollution in the developing world, issues an annual list of some of the world's most polluting industries.[75]
A 2018 report by the Institute for Agriculture and Trade Policy and GRAIN says that the meat and dairy industries are poised to surpass the oil industry as the world's worst polluters.[76]
Pure Earth issues an annual list of some of the world's worst polluted places.[77]
Air pollution
Soil contamination
Water pollution
Other
en/4708.html.txt
Polish (język polski, [ˈjɛ̃zɨk ˈpɔlskʲi] (listen), polszczyzna, [pɔlˈʂt͡ʂɨzna] (listen) or simply polski, [ˈpɔlskʲi] (listen)) is a West Slavic language of the Lechitic group.[9] It is spoken primarily in Poland and serves as the native language of the Poles. In addition to being an official language of Poland, it is also used by Polish minorities in other countries. There are over 50 million[2][1] Polish-language speakers around the world and it is one of the official languages of the European Union.
Polish is written with the standardized Polish alphabet, which has nine additions to the letters of the basic Latin script (ą, ć, ę, ł, ń, ó, ś, ź, ż). Among the major languages, it is most closely related to Slovak[10] and Czech,[11] but differs from other Slavic varieties in terms of pronunciation and general grammar. In addition, Polish was profoundly influenced by Latin and Romance languages such as Italian and French, as well as by Germanic languages (most notably German), which contributed a large number of loanwords and similar grammatical structures.[12][13][14] Polish currently has the largest number of speakers of the West Slavic group and is also the second most widely spoken Slavic language.[15][16]
Historically, Polish was a lingua franca,[17][18] important both diplomatically and academically in Central and Eastern Europe. Today, Polish is spoken by over 38.5 million people as their first language in Poland. It is also spoken as a second language in Northern Czech Republic and Slovakia, western parts of Belarus and Ukraine as well as in Central-Eastern Lithuania and Latvia. Because of the emigration from Poland during different time periods, most notably after World War II, millions of Polish speakers can be found in countries such as Canada, Argentina, Brazil, Israel, Australia, the United Kingdom and the United States.
Polish began to emerge as a distinct language around the 10th century, the process largely triggered by the establishment and development of the Polish state. Mieszko I, ruler of the Polans tribe from the Greater Poland region, united a few culturally and linguistically related tribes from the basins of the Vistula and Oder before eventually accepting baptism in 966. With Christianity, Poland also adopted the Latin alphabet, which made it possible to write down Polish, which until then had existed only as a spoken language.[19]
The precursor to modern Polish is the Old Polish language. Ultimately, Polish is thought to descend from the unattested Proto-Slavic language. Polish was a lingua franca from 1500–1700 in Central and parts of Eastern Europe, because of the political, cultural, scientific and military influence of the former Polish–Lithuanian Commonwealth.[20] Although not closely related to it, Polish shares many linguistic affinities with Ukrainian, an East Slavic language with which it has been in prolonged historical contact and in a state of mutual influence.[11][21][22] The Polish influence on Ukrainian is particularly marked in western Ukraine, which was under Polish cultural domination.[23]
The Book of Henryków (Polish: Księga henrykowska, Latin: Liber fundationis claustri Sanctae Mariae Virginis in Heinrichau), contains the earliest known sentence written in the Polish language: Day, ut ia pobrusa, a ti poziwai (in modern orthography: Daj, uć ja pobrusza, a ti pocziwaj; the corresponding sentence in modern Polish: Daj, niech ja pomielę, a ty odpoczywaj or Pozwól, że ja będę mełł, a ty odpocznij; and in English: Come, let me grind, and you take a rest), written around 1270.
The medieval recorder of this phrase, the Cistercian monk Peter of the Henryków monastery, noted that "Hoc est in polonico" ("This is in Polish").[24][25][26]
Polish, along with Czech and Slovak, forms the West Slavic dialect continuum. The three languages constitute Ausbau languages, i.e. lects that are considered distinct not on purely linguistic grounds, but rather due to sociopolitical and cultural factors.[27] Since the idioms have separately standardized norms and longstanding literary traditions, being the official languages of independent states, they are generally treated as autonomous languages, with the distinction between Polish and Czech-Slovak dialects being drawn along national lines.[27]
Poland is one of the most linguistically homogeneous European countries; nearly 97% of Poland's citizens declare Polish as their first language. Elsewhere, Poles constitute large minorities in Lithuania, Belarus, and Ukraine. Polish is the most widely used minority language in Lithuania's Vilnius County (26% of the population, according to the 2001 census results, with Vilnius having been part of Poland from 1922 until 1939) and is found elsewhere in southeastern Lithuania. In Ukraine, it is most common in western Lviv and Volyn Oblasts, while in West Belarus it is used by the significant Polish minority, especially in the Brest and Grodno regions and in areas along the Lithuanian border. There are significant numbers of Polish speakers among Polish emigrants and their descendants in many other countries.
In the United States, Polish Americans number more than 11 million, but most of them cannot speak Polish fluently. According to the 2000 United States Census, 667,414 Americans of age five years and over reported Polish as the language spoken at home, which is about 1.4% of people who speak languages other than English, 0.25% of the US population, and 6% of the Polish-American population. The largest concentrations of Polish speakers reported in the census (over 50%) were found in three states: Illinois (185,749), New York (111,740), and New Jersey (74,663).[28] Enough people in these areas speak Polish that PNC Financial Services (which has a large number of branches in all of these areas) offers services in Polish at all of its cash machines in addition to English and Spanish.[29]
According to the 2011 census there are now over 500,000 people in England and Wales who consider Polish to be their "main" language. In Canada, there is a significant Polish Canadian population: There are 242,885 speakers of Polish according to the 2006 census, with a particular concentration in Toronto (91,810 speakers) and Montreal.[30]
The geographical distribution of the Polish language was greatly affected by the territorial changes of Poland immediately after World War II and Polish population transfers (1944–46). Poles settled in the "Recovered Territories" in the west and north, which had previously been mostly German-speaking. Some Poles remained in the previously Polish-ruled territories in the east that were annexed by the USSR, resulting in the present-day Polish-speaking minorities in Lithuania, Belarus, and Ukraine, although many Poles were expelled or emigrated from those areas to areas within Poland's new borders. To the east of Poland, the most significant Polish minority lives in a long, narrow strip along either side of the Lithuania-Belarus border. Meanwhile, the flight and expulsion of Germans (1944–50), as well as the expulsion of Ukrainians and Operation Vistula, the 1947 forced resettlement of Ukrainian minorities to the Recovered Territories in the west of the country, contributed to the country's linguistic homogeneity.
The Polish language became far more homogeneous in the second half of the 20th century, in part due to the mass migration of several million Polish citizens from the eastern to the western part of the country after the Soviet annexation of the Kresy (Eastern Borderlands) in 1939, and the annexation of former German territory after World War II. This tendency toward homogeneity also stems from the vertically integrated nature of the Polish People's Republic.[31] In addition, Polish linguistics has been characterized by a strong drive towards promoting prescriptive ideas of language intervention and usage uniformity,[32] along with normatively oriented notions of language "correctness"[33] (unusual by Western standards).[32]
The inhabitants of different regions of Poland still speak Polish somewhat differently, although the differences between modern-day vernacular varieties and standard Polish (język ogólnopolski) appear relatively slight. Most of the middle-aged and young speak vernaculars close to standard Polish, while the traditional dialects are preserved among older people in rural areas.[33] First-language speakers of Polish have no trouble understanding each other, and non-native speakers may have difficulty recognizing the regional and social differences. The modern standard dialect, often termed "correct Polish",[33] is spoken or at least understood throughout the entire country.[11]
Polish has traditionally been described as consisting of four or five main regional dialects:
Kashubian, spoken in Pomerania west of Gdańsk on the Baltic Sea, is thought of either as a fifth Polish dialect or a distinct language, depending on the criteria used.[34][35] It contains a number of features not found elsewhere in Poland, e.g. nine distinct oral vowels (vs. the five of standard Polish) and (in the northern dialects) phonemic word stress, an archaic feature preserved from Common Slavic times and not found anywhere else among the West Slavic languages. However, it "lacks most of the linguistic and social determinants of language-hood".[36]
Many linguistic sources about the Slavic languages describe Silesian as a dialect of Polish.[37][38] However, many Silesians consider themselves a separate ethnicity and have been advocating for the recognition of a Silesian language. According to the last official census in Poland in 2011, over half a million people declared Silesian as their native language. Many sociolinguists (e.g. Tomasz Kamusella,[39] Agnieszka Pianka, Alfred F. Majewicz,[40] Tomasz Wicherkiewicz)[41] assume that whether a lect is an independent language or a dialect is decided by extralinguistic criteria, namely the attitudes of its speakers and/or political decisions, and that this is dynamic (i.e. it changes over time). Research organizations such as SIL International,[42] along with resources for the academic field of linguistics such as Ethnologue,[43] Linguist List[44] and others, for example the Ministry of Administration and Digitization,[45] have recognized the Silesian language. In July 2007, Silesian was recognized by ISO and assigned the ISO code szl.
Some additional characteristic but less widespread regional dialects include:
Polish has six oral vowels (all monophthongs) and two nasal vowels. The oral vowels are /i/ (spelled i), /ɨ/ (spelled y), /ɛ/ (spelled e), /a/ (spelled a), /ɔ/ (spelled o) and /u/ (spelled u or ó). The nasal vowels are /ɛ̃/ (spelled ę) and /ɔ̃/ (spelled ą).
The Polish consonant system shows more complexity: its characteristic features include the series of affricate and palatal consonants that resulted from four Proto-Slavic palatalizations and two further palatalizations that took place in Polish and Belarusian. The full set of consonants, together with their most common spellings, can be presented as follows (although other phonological analyses exist):
Neutralization occurs between voiced–voiceless consonant pairs in certain environments: at the end of words (where devoicing occurs), and in certain consonant clusters (where assimilation occurs). For details, see Voicing and devoicing in the article on Polish phonology.
Most Polish words are paroxytones (that is, the stress falls on the second-to-last syllable of a polysyllabic word), although there are exceptions.
Polish permits complex consonant clusters, which historically often arose from the disappearance of yers. Polish can have word-initial and word-medial clusters of up to four consonants, whereas word-final clusters can have up to five consonants.[47] Examples of such clusters can be found in words such as bezwzględny [bɛzˈvzɡlɛndnɨ] ('absolute' or 'heartless', 'ruthless'), źdźbło [ˈʑd͡ʑbwɔ] ('blade of grass'), wstrząs [ˈfstʂɔw̃s] ('shock'), and krnąbrność [ˈkrnɔmbrnɔɕt͡ɕ] ('disobedience'). A popular Polish tongue-twister (from a verse by Jan Brzechwa) is W Szczebrzeszynie chrząszcz brzmi w trzcinie [fʂt͡ʂɛbʐɛˈʂɨɲɛ ˈxʂɔw̃ʂt͡ʂ ˈbʐmi fˈtʂt͡ɕiɲɛ] ('In Szczebrzeszyn a beetle buzzes in the reed').
Unlike languages such as Czech, Polish does not have syllabic consonants – the nucleus of a syllable is always a vowel.[48]
The consonant /j/ is restricted to positions adjacent to a vowel. It also cannot precede i or y.
The predominant stress pattern in Polish is penultimate stress – in a word of more than one syllable, the next-to-last syllable is stressed. Alternating preceding syllables carry secondary stress, e.g. in a four-syllable word, where the primary stress is on the third syllable, there will be secondary stress on the first.[49]
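This alternating pattern can be sketched as a small function (a simplified model of the regular case only; it deliberately ignores loanword and clitic exceptions, and real stress assignment operates on pronunciation, not spelling):

```python
# Simplified sketch of regular Polish stress: primary stress falls on the
# penultimate syllable, and secondary stress on alternating syllables
# counting backwards from the primary one.

def stress_pattern(syllables):
    """Return (primary_index, [secondary_indices]) for a syllable list."""
    if len(syllables) < 2:
        return 0, []
    primary = len(syllables) - 2          # penultimate syllable
    secondary = list(range(primary - 2, -1, -2))
    return primary, sorted(secondary)

# od-po-czy-waj ('take a rest'): primary on 'czy', secondary on 'od'
print(stress_pattern(["od", "po", "czy", "waj"]))  # (2, [0])
```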
Each vowel represents one syllable, although the letter i normally does not represent a vowel when it precedes another vowel (it represents /j/, palatalization of the preceding consonant, or both depending on analysis). Also the letters u and i sometimes represent only semivowels when they follow another vowel, as in autor /ˈawtɔr/ ('author'), mostly in loanwords (so not in native nauka /naˈu.ka/ 'science, the act of learning', for example, nor in nativized Mateusz /maˈte.uʂ/ 'Matthew').
Some loanwords, particularly from the classical languages, have the stress on the antepenultimate (third-from-last) syllable. For example, fizyka (/ˈfizɨka/) ('physics') is stressed on the first syllable. This may lead to a rare phenomenon of minimal pairs differing only in stress placement, for example muzyka /ˈmuzɨka/ 'music' vs. muzyka /muˈzɨka/ - genitive singular of muzyk 'musician'. When additional syllables are added to such words through inflection or suffixation, the stress normally becomes regular. For example, uniwersytet (/uɲiˈvɛrsɨtɛt/, 'university') has irregular stress on the third (or antepenultimate) syllable, but the genitive uniwersytetu (/uɲivɛrsɨˈtɛtu/) and derived adjective uniwersytecki (/uɲivɛrsɨˈtɛt͡skʲi/) have regular stress on the penultimate syllables. Over time, loanwords become nativized to have penultimate stress.[50]
Another class of exceptions is verbs with the conditional endings -by, -bym, -byśmy, etc. These endings are not counted in determining the position of the stress; for example, zrobiłbym ('I would do') is stressed on the first syllable, and zrobilibyśmy ('we would do') on the second. According to prescriptive authorities, the same applies to the first and second person plural past tense endings -śmy, -ście, although this rule is often ignored in colloquial speech (so zrobiliśmy 'we did' should be prescriptively stressed on the second syllable, although in practice it is commonly stressed on the third as zrobiliśmy).[51] These irregular stress patterns are explained by the fact that these endings are detachable clitics rather than true verbal inflections: for example, instead of kogo zobaczyliście? ('whom did you see?') it is possible to say kogoście zobaczyli? – here kogo retains its usual stress (first syllable) in spite of the attachment of the clitic. Reanalysis of the endings as inflections when attached to verbs causes the different colloquial stress patterns. These stress patterns are however nowadays sanctioned as part of the colloquial norm of standard Polish.[52]
Some common word combinations are stressed as if they were a single word. This applies in particular to many combinations of preposition plus a personal pronoun, such as do niej ('to her'), na nas ('on us'), przeze mnie ('because of me'), each stressed on the penultimate syllable of the whole combination.
The Polish alphabet derives from the Latin script, but includes certain additional letters formed using diacritics. The Polish alphabet was one of three major forms of Latin-based orthography developed for Slavic languages, the others being Czech orthography and Croatian orthography, the last of these being a 19th-century invention that sought a compromise between the first two. Kashubian uses a Polish-based system, Slovak uses a Czech-based system, and Slovene follows the Croatian one; the Sorbian languages blend the Polish and the Czech ones.
The diacritics used in the Polish alphabet are the kreska (graphically similar to the acute accent) in the letters ć, ń, ó, ś, ź and, as a stroke, through the letter ł; the kropka (superior dot) in the letter ż; and the ogonek ("little tail") in the letters ą, ę. The letters q, v, x are used only in foreign words and names.[53]
Polish orthography is largely phonemic—there is a consistent correspondence between letters (or digraphs and trigraphs) and phonemes (for exceptions see below). The letters of the alphabet and their normal phonemic values are listed in the following table.
The following digraphs and trigraphs are used:
Voiced consonant letters frequently come to represent voiceless sounds (as shown in the tables); this occurs at the end of words and in certain clusters, due to the neutralization mentioned in the Phonology section above. Occasionally also voiceless consonant letters can represent voiced sounds in clusters.
The spelling rule for the palatal sounds /ɕ/, /ʑ/, /tɕ/, /dʑ/ and /ɲ/ is as follows: before the vowel i the plain letters s, z, c, dz, n are used; before other vowels the combinations si, zi, ci, dzi, ni are used; when not followed by a vowel the diacritic forms ś, ź, ć, dź, ń are used. For example, the s in siwy ("grey-haired"), the si in siarka ("sulphur") and the ś in święty ("holy") all represent the sound /ɕ/. The exceptions to the above rule are certain loanwords from Latin, Italian, French, Russian or English—where s before i is pronounced as s, e.g. sinus, sinologia, do re mi fa sol la si do, Saint-Simon i saint-simoniści, Sierioża, Siergiej, Singapur, singiel. In other loanwords the vowel i is changed to y, e.g. Syria, Sybir, synchronizacja, Syrakuzy.
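This three-way spelling rule is deterministic and can be rendered as a short sketch. Below is an illustrative Python version (the mapping table and function name are this example's own; the loanword exceptions just mentioned are not handled):

```python
# Spelling chooser for the alveolo-palatal consonants, per the rule
# above: plain letter before i, letter + i before another vowel,
# diacritic form when no vowel follows. Loanword exceptions
# (sinus, Singapur, etc.) are not handled.

PALATALS = {  # phoneme -> (plain spelling, diacritic spelling)
    "ɕ": ("s", "ś"),
    "ʑ": ("z", "ź"),
    "tɕ": ("c", "ć"),
    "dʑ": ("dz", "dź"),
    "ɲ": ("n", "ń"),
}
OTHER_VOWELS = set("aąeęoóuy")

def spell_palatal(phoneme, next_letter):
    """next_letter is the following letter, or '' word-finally or
    before a consonant."""
    plain, diacritic = PALATALS[phoneme]
    if next_letter == "i":
        return plain          # e.g. siwy: written s, the i follows
    if next_letter in OTHER_VOWELS:
        return plain + "i"    # e.g. siarka: written si before a
    return diacritic          # e.g. święty: written ś before a consonant

print(spell_palatal("ɕ", "i"))  # s
print(spell_palatal("ɕ", "a"))  # si
print(spell_palatal("ɕ", "w"))  # ś
```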
The following table shows the correspondence between the sounds and spelling:
Digraphs and trigraphs are used:
Similar principles apply to /kʲ/, /ɡʲ/, /xʲ/ and /lʲ/, except that these can only occur before vowels, so the spellings are k, g, (c)h, l before i, and ki, gi, (c)hi, li otherwise. Most Polish speakers, however, do not consider palatalisation of k, g, (c)h or l as creating new sounds.
Except in the cases mentioned above, the letter i, if followed by another vowel in the same word, usually represents /j/, while also marking the palatalisation of the preceding consonant.
The letters ą and ę, when followed by plosives and affricates, represent an oral vowel followed by a nasal consonant, rather than a nasal vowel. For example, ą in dąb ("oak") is pronounced /ɔm/, and ę in tęcza ("rainbow") is pronounced /ɛn/ (the nasal assimilates to the following consonant). When followed by l or ł (for example przyjęli, przyjęły), ę is pronounced as just e. When ę is at the end of the word it is often pronounced as just /ɛ/.
Note that, depending on the word, the phoneme /x/ can be spelt h or ch, the phoneme /ʐ/ can be spelt ż or rz, and /u/ can be spelt u or ó. In several cases the choice of spelling distinguishes words that are pronounced identically, for example może ("maybe") and morze ("sea").
In occasional words, letters that normally form a digraph are pronounced separately. For example, rz represents /rz/, not /ʐ/, in words like zamarzać ("freeze") and in the name Tarzan.
Notice that doubled letters represent separate occurrences of the sound in question; for example Anna is pronounced /anːa/ in Polish (the double n is often pronounced as a lengthened single n).
There are certain clusters where a written consonant would not be pronounced. For example, the ł in the words mógł ("could") and jabłko ("apple") might be omitted in ordinary speech, leading to the pronunciations muk and japko or jabko.
Polish is a highly fusional language with relatively free word order, although the dominant arrangement is subject–verb–object (SVO). There are no articles, and subject pronouns are often dropped.
Nouns belong to one of three genders: masculine, feminine and neuter. A distinction is also made between animate and inanimate masculine nouns in the singular, and between masculine personal and non-masculine-personal nouns in the plural. There are seven cases: nominative, genitive, dative, accusative, instrumental, locative and vocative.
Adjectives agree with nouns in terms of gender, case, and number. Attributive adjectives most commonly precede the noun, although in certain cases, especially in fixed phrases (like język polski, "Polish (language)"), the noun may come first; the rule of thumb is that a generic descriptive adjective normally precedes the noun (e.g. piękny kwiat, "beautiful flower") while a categorising adjective often follows it (e.g. węgiel kamienny, "black coal"). Most short adjectives and their derived adverbs form comparatives and superlatives by inflection (the superlative is formed by prefixing naj- to the comparative).
Verbs are of imperfective or perfective aspect, often occurring in pairs. Imperfective verbs have a present tense, past tense, compound future tense (except for być "to be", which has a simple future będę etc., this in turn being used to form the compound future of other verbs), subjunctive/conditional (formed with the detachable particle by), imperatives, an infinitive, present participle, present gerund and past participle. Perfective verbs have a simple future tense (formed like the present tense of imperfective verbs), past tense, subjunctive/conditional, imperatives, infinitive, present gerund and past participle. Conjugated verb forms agree with their subject in terms of person, number, and (in the case of past tense and subjunctive/conditional forms) gender.
Passive-type constructions can be made using the auxiliary być or zostać ("become") with the passive participle. There is also an impersonal construction where the active verb is used (in third person singular) with no subject, but with the reflexive pronoun się present to indicate a general, unspecified subject (as in pije się wódkę "vodka is being drunk"—note that wódka appears in the accusative). A similar sentence type in the past tense uses the passive participle with the ending -o, as in widziano ludzi ("people were seen"). As in other Slavic languages, there are also subjectless sentences formed using such words as można ("it is possible") together with an infinitive.
Yes-no questions (both direct and indirect) are formed by placing the word czy at the start. Negation uses the word nie, before the verb or other item being negated; nie is still added before the verb even if the sentence also contains other negatives such as nigdy ("never") or nic ("nothing"), effectively creating a double negative.
Cardinal numbers have a complex system of inflection and agreement. Zero and cardinal numbers higher than five (except for those ending with the digit 2, 3 or 4 but not ending with 12, 13 or 14) govern the genitive case rather than the nominative or accusative. Special forms of numbers (collective numerals) are used with certain classes of noun, which include dziecko ("child") and exclusively plural nouns such as drzwi ("door").
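The genitive-government rule for numerals is purely arithmetic on the final digits and can be sketched as follows (an illustrative sketch; the function name is this example's own, and "nominative" here stands in for the nominative/accusative agreement pattern):

```python
# Case taken by a counted noun after a cardinal numeral, per the rule
# above: nominative/accusative after 1 and after numerals ending in
# 2, 3 or 4 (but not 12, 13, 14); genitive after zero and the rest.

def noun_case_after_numeral(n):
    if n == 1:
        return "nominative"
    if n % 10 in (2, 3, 4) and n % 100 not in (12, 13, 14):
        return "nominative"
    return "genitive"

# e.g. 2 koty (nominative plural), 5 kotów (genitive plural),
# 12 kotów, but 22 koty again
for n in (1, 2, 5, 12, 22, 100):
    print(n, noun_case_after_numeral(n))
```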
Polish has, over the centuries, borrowed a number of words from other languages. When borrowing, pronunciation was adapted to Polish phonemes and spelling was altered to match Polish orthography. In addition, word endings are liberally applied to almost any word to produce verbs, nouns, adjectives, as well as adding the appropriate endings for cases of nouns, adjectives, diminutives, double-diminutives, augmentatives, etc.
Depending on the historical period, borrowing has proceeded from various languages. Notable influences have been Latin (10th–18th centuries),[21] Czech (10th and 14th–15th centuries), Italian (16th–17th centuries),[21] French (17th–19th centuries),[21] German (13th–15th and 18th–20th centuries), Hungarian (15th–16th centuries)[21] and Turkish (17th century). Currently, English words are the most common imports to Polish.[54]
The Latin language, for a very long time the only official language of the Polish state, has had a great influence on Polish. Many Polish words were direct borrowings or calques (e.g. rzeczpospolita from res publica) from Latin. Latin was known to a greater or lesser degree by most of the numerous szlachta in the 16th to 18th centuries (and it continued to be extensively taught at secondary schools until World War II). Apart from dozens of loanwords, its influence can also be seen in a number of verbatim Latin phrases in Polish literature (especially from the 19th century and earlier).
During the 12th and 13th centuries, Mongolian words were brought to the Polish language during wars with the armies of Genghis Khan and his descendants, e.g. dzida (spear) and szereg (a line or row).[54]
Words from Czech, an important influence during the 10th and 14th–15th centuries, include sejm, hańba and brama.[54]
In 1518, the Polish king Sigismund I the Old married Bona Sforza, the niece of the Holy Roman emperor Maximilian, who introduced Italian cuisine to Poland, especially vegetables.[55] Hence, words from Italian include pomidor from "pomodoro" (tomato), kalafior from "cavolfiore" (cauliflower), and pomarańcza, a portmanteau from Italian "pomo" (pome) plus "arancio" (orange). A later word of Italian origin is autostrada (from Italian "autostrada", highway).[55]
In the 18th century, with the rising prominence of France in Europe, French supplanted Latin as an important source of words. Some French borrowings also date from the Napoleonic era, when the Poles were enthusiastic supporters of Napoleon. Examples include ekran (from French "écran", screen), abażur ("abat-jour", lamp shade), rekin ("requin", shark), meble ("meuble", furniture), bagaż ("bagage", luggage), walizka ("valise", suitcase), fotel ("fauteuil", armchair), plaża ("plage", beach) and koszmar ("cauchemar", nightmare). Some place names have also been adapted from French, such as the Warsaw borough of Żoliborz ("joli bord" = beautiful riverside), as well as the town of Żyrardów (from the name Girard, with the Polish suffix -ów attached to refer to the founder of the town).[56]
Many words were borrowed from the German language from the sizable German population in Polish cities during medieval times. German words found in the Polish language are often connected with trade, the building industry, civic rights and city life. Some words were assimilated verbatim, for example handel (trade) and dach (roof); others are pronounced the same but differ in spelling, for example German Schnur and Polish sznur (cord). As a result of being neighbours with Germany, Polish has many German expressions which have become literally translated (calques). The regional dialects of Upper Silesia and Masuria (the Polish part of former East Prussia) have noticeably more German loanwords than other varieties.
The contacts with Ottoman Turkey in the 17th century brought many new words, some of them still in use, such as: jar ("yar" deep valley), szaszłyk ("şişlik" shish kebab), filiżanka ("fincan" cup), arbuz ("karpuz" watermelon), dywan ("divan" carpet),[57] etc.
From the founding of the Kingdom of Poland in 1025 through the early years of the Polish-Lithuanian Commonwealth created in 1569, Poland was the most tolerant country of Jews in Europe. Known as the "paradise for the Jews",[58][59] it became a shelter for persecuted and expelled European Jewish communities and the home to the world's largest Jewish community of the time. As a result, many Polish words come from Yiddish, spoken by the large Polish Jewish population that existed until the Holocaust. Borrowed Yiddish words include bachor (an unruly boy or child), bajzel (slang for mess), belfer (slang for teacher), ciuchy (slang for clothing), cymes (slang for very tasty food), geszeft (slang for business), kitel (slang for apron), machlojka (slang for scam), mamona (money), manele (slang for oddments), myszygene (slang for lunatic), pinda (slang for girl, pejoratively), plajta (slang for bankruptcy), rejwach (noise), szmal (slang for money), and trefny (dodgy).[60]
The mountain dialects of the Górale in southern Poland have quite a number of words borrowed from Hungarian (e.g. baca, gazda, juhas, hejnał) and Romanian as a result of historical contacts with Hungarian-dominated Slovakia and Wallachian herders who travelled north along the Carpathians.[61]
Thieves' slang includes such words as kimać (to sleep) or majcher (knife), of Greek origin, a language then considered unknown to the outside world.[62]
In addition, Turkish and Tatar have exerted influence upon the vocabulary of war, names of oriental costumes etc.[21] Russian borrowings began to make their way into Polish from the second half of the 19th century on.[21]
Polish has also received a substantial number of English loanwords, particularly after World War II.[21] Recent loanwords come primarily from the English language, mainly those with Latin or Greek roots, for example komputer (computer) and korupcja (from 'corruption', but with the sense restricted to 'bribery'). Concatenation of parts of words (e.g. auto-moto), which is not native to Polish but common in English, is also sometimes used. When borrowing English words, Polish often changes their spelling. For example, the Latin suffix '-tio' corresponds to -cja; to make the word plural, -cja becomes -cje. Examples of this include inauguracja (inauguration), dewastacja (devastation), recepcja (reception), konurbacja (conurbation) and konotacje (connotations). Also, the digraph qu becomes kw (kwadrant = quadrant; kworum = quorum).
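The -cja and kw adaptations described above are regular enough to sketch. The following toy function covers only these two patterns for the cited examples (names are this example's own; other adaptations, such as v becoming w in dewastacja, are not modelled):

```python
# Toy sketch of two regular spelling adaptations of English/Latin
# borrowings: the ending -tion -> -cja and the digraph qu -> kw,
# plus the plural -cja -> -cje.

def polonize(word):
    word = word.replace("qu", "kw")
    if word.endswith("tion"):
        word = word[:-len("tion")] + "cja"
    return word

def pluralize_cja(word):
    return word[:-1] + "e" if word.endswith("cja") else word

print(polonize("reception"))       # recepcja
print(polonize("quorum"))          # kworum
print(pluralize_cja("konotacja"))  # konotacje
```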
The Polish language has influenced others. Particular influences appear in other Slavic languages and in German — due to their proximity and shared borders.[63] Examples of loanwords include German Grenze (border),[64] Dutch and Afrikaans grens from Polish granica; German Peitzker from Polish piskorz (weatherfish); German Zobel, French zibeline, Swedish sobel, and English sable from Polish soból; and ogonek ("little tail") — the word describing a diacritic hook-sign added below some letters in various alphabets. "Szmata," a Polish, Slovak and Ruthenian word for "mop" or "rag", became part of Yiddish. The Polish language exerted significant lexical influence upon Ukrainian, particularly in the fields of abstract and technical terminology; for example, the Ukrainian word панство panstvo (country) is derived from Polish państwo.[23] The extent of Polish influence is particularly noticeable in Western Ukrainian dialects.[23]
There is a substantial number of Polish words which officially became part of Yiddish, once the main language of European Jews. These include basic items, objects or terms such as a bread bun (Polish bułka, Yiddish בולקע bulke), a fishing rod (wędka, ווענטקע ventke), an oak (dąb, דעמב demb), a meadow (łąka, לאָנקע lonke), a moustache (wąsy, וואָנצעס vontses) and a bladder (pęcherz, פּענכער penkher).[65]
Quite a few culinary loanwords exist in German and in other languages, some of which describe distinctive features of Polish cuisine. These include German and English Quark from twaróg (a kind of fresh cheese) and German Gurke, English gherkin from ogórek (cucumber). The word pierogi (Polish dumplings) has spread internationally, as well as pączki (Polish donuts)[66] and kiełbasa (sausage, e.g. kolbaso in Esperanto). As far as pierogi is concerned, the original Polish word is already plural (sing. pieróg, plural pierogi; stem pierog-, plural ending -i; note that o becomes ó in a closed syllable, as in the singular here), yet it is commonly used with the English plural ending -s in Canada and the United States, pierogis, thus making it a "double plural". A similar situation happened with the Polish loanword from English czipsy ("potato chips"): English chips is already plural in the original (chip + -s), yet it has obtained the Polish plural ending -y.
The word spruce entered the English language from the Polish name of Prusy (a historical region, today part of Poland). It became spruce because in Polish, z Prus, sounded like "spruce" in English (transl. "from Prussia") and was a generic term for commodities brought to England by Hanseatic merchants and because the tree was believed to have come from Polish Ducal Prussia.[67] However, it can be argued that the word is actually derived from the Old French term Pruce, meaning literally Prussia.[68]
Polish (język polski, [ˈjɛ̃zɨk ˈpɔlskʲi] (listen), polszczyzna, [pɔlˈʂt͡ʂɨzna] (listen) or simply polski, [ˈpɔlskʲi] (listen)) is a West Slavic language of the Lechitic group.[9] It is spoken primarily in Poland and serves as the native language of the Poles. In addition to being an official language of Poland, it is also used by Polish minorities in other countries. There are over 50 million[2][1] Polish-language speakers around the world and it is one of the official languages of the European Union.
Polish is written with the standardized Polish alphabet, which has nine additions to the letters of the basic Latin script (ą, ć, ę, ł, ń, ó, ś, ź, ż). Among the major languages, it is most closely related to Slovak[10] and Czech,[11] but differs from other Slavic varieties in terms of pronunciation and general grammar. In addition, Polish was profoundly influenced by Latin and other Italic languages like Italian and French as well as Germanic languages (most notably German), which contributed to a large number of loanwords and similar grammatical structures.[12][13][14] Polish currently has the largest number of speakers of the West Slavic group and is also the second most widely spoken Slavic language.[15][16]
Historically, Polish was a lingua franca,[17][18] important both diplomatically and academically in Central and Eastern Europe. Today, Polish is spoken by over 38.5 million people as their first language in Poland. It is also spoken as a second language in Northern Czech Republic and Slovakia, western parts of Belarus and Ukraine as well as in Central-Eastern Lithuania and Latvia. Because of the emigration from Poland during different time periods, most notably after World War II, millions of Polish speakers can be found in countries such as Canada, Argentina, Brazil, Israel, Australia, the United Kingdom and the United States.
Polish began to emerge as a distinct language around the 10th century, the process largely triggered by the establishment and development of the Polish state. Mieszko I, ruler of the Polans tribe from the Greater Poland region, united a few culturally and linguistically related tribes from the basins of the Vistula and Oder before eventually accepting baptism in 966. With Christianity, Poland also adopted the Latin alphabet, which made it possible to write down Polish, which until then had existed only as a spoken language.[19]
The precursor to modern Polish is the Old Polish language. Ultimately, Polish is thought to descend from the unattested Proto-Slavic language. Polish was a lingua franca from 1500–1700 in Central and parts of Eastern Europe, because of the political, cultural, scientific and military influence of the former Polish–Lithuanian Commonwealth.[20] Although not closely related to it, Polish shares many linguistic affinities with Ukrainian, an East Slavic language with which it has been in prolonged historical contact and in a state of mutual influence.[11][21][22] The Polish influence on Ukrainian is particularly marked in western Ukraine, which was under Polish cultural domination.[23]
The Book of Henryków (Polish: Księga henrykowska, Latin: Liber fundationis claustri Sanctae Mariae Virginis in Heinrichau), contains the earliest known sentence written in the Polish language: Day, ut ia pobrusa, a ti poziwai (in modern orthography: Daj, uć ja pobrusza, a ti pocziwaj; the corresponding sentence in modern Polish: Daj, niech ja pomielę, a ty odpoczywaj or Pozwól, że ja będę mełł, a ty odpocznij; and in English: Come, let me grind, and you take a rest), written around 1270.
The medieval recorder of this phrase, the Cistercian monk Peter of the Henryków monastery, noted that "Hoc est in polonico" ("This is in Polish").[24][25][26]
Polish, along with Czech and Slovak, forms the West Slavic dialect continuum. The three languages constitute Ausbau languages, i.e. lects that are considered distinct not on purely linguistic grounds, but rather due to sociopolitical and cultural factors.[27] Since the idioms have separately standardized norms and longstanding literary traditions, being the official languages of independent states, they are generally treated as autonomous languages, with the distinction between Polish and Czech-Slovak dialects being drawn along national lines.[27]
Poland is one of the most linguistically homogeneous European countries; nearly 97% of Poland's citizens declare Polish as their first language. Elsewhere, Poles constitute large minorities in Lithuania, Belarus, and Ukraine. Polish is the most widely used minority language in Lithuania's Vilnius County (26% of the population, according to the 2001 census results, with Vilnius having been part of Poland from 1922 until 1939) and is found elsewhere in southeastern Lithuania. In Ukraine, it is most common in western Lviv and Volyn Oblasts, while in West Belarus it is used by the significant Polish minority, especially in the Brest and Grodno regions and in areas along the Lithuanian border. There are significant numbers of Polish speakers among Polish emigrants and their descendants in many other countries.
In the United States, Polish Americans number more than 11 million but most of them cannot speak Polish fluently. According to the 2000 United States Census, 667,414 Americans of age five years and over reported Polish as the language spoken at home, which is about 1.4% of people who speak languages other than English, 0.25% of the US population, and 6% of the Polish-American population. The largest concentrations of Polish speakers reported in the census (over 50%) were found in three states: Illinois (185,749), New York (111,740), and New Jersey (74,663).[28] Enough people in these areas speak Polish that PNC Financial Services (which has a large number of branches in all of these areas) offers services in Polish at all of its cash machines in addition to English and Spanish.[29]
According to the 2011 census there are now over 500,000 people in England and Wales who consider Polish to be their "main" language. In Canada, there is a significant Polish Canadian population: there are 242,885 speakers of Polish according to the 2006 census, with a particular concentration in Toronto (91,810 speakers) and Montreal.[30]
The geographical distribution of the Polish language was greatly affected by the territorial changes of Poland immediately after World War II and Polish population transfers (1944–46). Poles settled in the "Recovered Territories" in the west and north, which had previously been mostly German-speaking. Some Poles remained in the previously Polish-ruled territories in the east that were annexed by the USSR, resulting in the present-day Polish-speaking minorities in Lithuania, Belarus, and Ukraine, although many Poles were expelled or emigrated from those areas to areas within Poland's new borders. To the east of Poland, the most significant Polish minority lives in a long, narrow strip along either side of the Lithuania-Belarus border. Meanwhile, the flight and expulsion of Germans (1944–50), as well as the expulsion of Ukrainians and Operation Vistula, the 1947 forced resettlement of Ukrainian minorities to the Recovered Territories in the west of the country, contributed to the country's linguistic homogeneity.
The Polish language became far more homogeneous in the second half of the 20th century, in part due to the mass migration of several million Polish citizens from the eastern to the western part of the country after the Soviet annexation of the Kresy (Eastern Borderlands) in 1939, and the annexation of former German territory after World War II. This tendency toward homogeneity also stems from the vertically integrated nature of the Polish People's Republic.[31] In addition, Polish linguistics has been characterized by a strong drive towards promoting prescriptive ideas of language intervention and usage uniformity,[32] along with normatively-oriented notions of language "correctness"[33] (unusual by Western standards).[32]
The inhabitants of different regions of Poland still speak Polish somewhat differently, although the differences between modern-day vernacular varieties and standard Polish (język ogólnopolski) appear relatively slight. Most of the middle-aged and young speak vernaculars close to standard Polish, while the traditional dialects are preserved among older people in rural areas.[33] First-language speakers of Polish have no trouble understanding each other, and non-native speakers may have difficulty recognizing the regional and social differences. The modern standard dialect, often termed "correct Polish",[33] is spoken or at least understood throughout the entire country.[11]
Polish has traditionally been described as consisting of four or five main regional dialects:
Kashubian, spoken in Pomerania west of Gdańsk on the Baltic Sea, is thought of either as a fifth Polish dialect or a distinct language, depending on the criteria used.[34][35] It contains a number of features not found elsewhere in Poland, e.g. nine distinct oral vowels (vs. the five of standard Polish) and (in the northern dialects) phonemic word stress, an archaic feature preserved from Common Slavic times and not found anywhere else among the West Slavic languages. However, it "lacks most of the linguistic and social determinants of language-hood".[36]
Many linguistic sources about the Slavic languages describe Silesian as a dialect of Polish.[37][38] However, many Silesians consider themselves a separate ethnicity and have been advocating for the recognition of a Silesian language. According to the last official census in Poland in 2011, over half a million people declared Silesian as their native language. Many sociolinguists (e.g. Tomasz Kamusella,[39] Agnieszka Pianka, Alfred F. Majewicz,[40] Tomasz Wicherkiewicz)[41] assume that extralinguistic criteria (the views of the speakers of the variety and/or political decisions) decide whether a lect is an independent language or a dialect, and that this is dynamic (i.e. it changes over time). Research organizations such as SIL International[42] and resources for the academic field of linguistics such as Ethnologue,[43] Linguist List[44] and others, for example the Ministry of Administration and Digitization,[45] have recognized the Silesian language. In July 2007, Silesian was recognized by the ISO, which assigned it the code szl.
Some additional characteristic but less widespread regional dialects include:
Polish has six oral vowels (all monophthongs) and two nasal vowels. The oral vowels are /i/ (spelled i), /ɨ/ (spelled y), /ɛ/ (spelled e), /a/ (spelled a), /ɔ/ (spelled o) and /u/ (spelled u or ó). The nasal vowels are /ɛ̃/ (spelled ę) and /ɔ̃/ (spelled ą).
The Polish consonant system shows more complexity: its characteristic features include the series of affricate and palatal consonants that resulted from four Proto-Slavic palatalizations and two further palatalizations that took place in Polish and Belarusian. The full set of consonants, together with their most common spellings, can be presented as follows (although other phonological analyses exist):
Neutralization occurs between voiced–voiceless consonant pairs in certain environments: at the end of words (where devoicing occurs), and in certain consonant clusters (where assimilation occurs). For details, see Voicing and devoicing in the article on Polish phonology.
Most Polish words are paroxytones (that is, the stress falls on the second-to-last syllable of a polysyllabic word), although there are exceptions.
Polish permits complex consonant clusters, which historically often arose from the disappearance of yers. Polish can have word-initial and word-medial clusters of up to four consonants, whereas word-final clusters can have up to five consonants.[47] Examples of such clusters can be found in words such as bezwzględny [bɛzˈvzɡlɛndnɨ] ('absolute' or 'heartless', 'ruthless'), źdźbło [ˈʑd͡ʑbwɔ] ('blade of grass'), wstrząs [ˈfstʂɔw̃s] ('shock'), and krnąbrność [ˈkrnɔmbrnɔɕt͡ɕ] ('disobedience'). A popular Polish tongue-twister (from a verse by Jan Brzechwa) is W Szczebrzeszynie chrząszcz brzmi w trzcinie [fʂt͡ʂɛbʐɛˈʂɨɲɛ ˈxʂɔw̃ʂt͡ʂ ˈbʐmi fˈtʂt͡ɕiɲɛ] ('In Szczebrzeszyn a beetle buzzes in the reed').
Unlike languages such as Czech, Polish does not have syllabic consonants – the nucleus of a syllable is always a vowel.[48]
The consonant /j/ is restricted to positions adjacent to a vowel. It also cannot precede i or y.
The predominant stress pattern in Polish is penultimate stress – in a word of more than one syllable, the next-to-last syllable is stressed. Alternating preceding syllables carry secondary stress, e.g. in a four-syllable word, where the primary stress is on the third syllable, there will be secondary stress on the first.[49]
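This primary-plus-secondary pattern can be sketched as a small function over pre-split syllables (an illustrative sketch; the 2/1/0 marking scheme and the function name are this example's own):

```python
# Default Polish stress per the description above: primary stress on
# the penultimate syllable, secondary stress on every second syllable
# counting leftwards from the primary. Syllables are supplied pre-split.

def stress_pattern(syllables):
    """Return one mark per syllable: 2 = primary, 1 = secondary, 0 = none."""
    n = len(syllables)
    marks = [0] * n
    primary = n - 2 if n > 1 else 0   # penultimate (0-based)
    marks[primary] = 2
    for i in range(primary - 2, -1, -2):
        marks[i] = 1
    return marks

# cze-ko-la-da: primary on the third syllable, secondary on the first
print(stress_pattern(["cze", "ko", "la", "da"]))  # [1, 0, 2, 0]
```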
Each vowel represents one syllable, although the letter i normally does not represent a vowel when it precedes another vowel (it represents /j/, palatalization of the preceding consonant, or both depending on analysis). Also the letters u and i sometimes represent only semivowels when they follow another vowel, as in autor /ˈawtɔr/ ('author'), mostly in loanwords (so not in native nauka /naˈu.ka/ 'science, the act of learning', for example, nor in nativized Mateusz /maˈte.uʂ/ 'Matthew').
|
54 |
+
|
55 |
+
Some loanwords, particularly from the classical languages, have the stress on the antepenultimate (third-from-last) syllable. For example, fizyka (/ˈfizɨka/) ('physics') is stressed on the first syllable. This may lead to a rare phenomenon of minimal pairs differing only in stress placement, for example muzyka /ˈmuzɨka/ 'music' vs. muzyka /muˈzɨka/ - genitive singular of muzyk 'musician'. When additional syllables are added to such words through inflection or suffixation, the stress normally becomes regular. For example, uniwersytet (/uɲiˈvɛrsɨtɛt/, 'university') has irregular stress on the third (or antepenultimate) syllable, but the genitive uniwersytetu (/uɲivɛrsɨˈtɛtu/) and derived adjective uniwersytecki (/uɲivɛrsɨˈtɛt͡skʲi/) have regular stress on the penultimate syllables. Over time, loanwords become nativized to have penultimate stress.[50]
|
56 |
+
|
57 |
+
Another class of exceptions is verbs with the conditional endings -by, -bym, -byśmy, etc. These endings are not counted in determining the position of the stress; for example, zrobiłbym ('I would do') is stressed on the first syllable, and zrobilibyśmy ('we would do') on the second. According to prescriptive authorities, the same applies to the first and second person plural past tense endings -śmy, -ście, although this rule is often ignored in colloquial speech (so zrobiliśmy 'we did' should be prescriptively stressed on the second syllable, although in practice it is commonly stressed on the third as zrobiliśmy).[51] These irregular stress patterns are explained by the fact that these endings are detachable clitics rather than true verbal inflections: for example, instead of kogo zobaczyliście? ('whom did you see?') it is possible to say kogoście zobaczyli? – here kogo retains its usual stress (first syllable) in spite of the attachment of the clitic. Reanalysis of the endings as inflections when attached to verbs causes the different colloquial stress patterns. These stress patterns are however nowadays sanctioned as part of the colloquial norm of standard Polish.[52]
|
58 |
+
|
59 |
+
Some common word combinations are stressed as if they were a single word. This applies in particular to many combinations of preposition plus a personal pronoun, such as do niej ('to her'), na nas ('on us'), przeze mnie ('because of me'), all stressed on the bolded syllable.
|
60 |
+
|
61 |
+
The Polish alphabet derives from the Latin script, but includes certain additional letters formed with diacritics. The Polish alphabet was one of three major forms of Latin-based orthography developed for Slavic languages, the others being Czech orthography and Croatian orthography, the last of these being a 19th-century invention that attempted a compromise between the first two. Kashubian uses a Polish-based system, Slovak uses a Czech-based system, and Slovene follows the Croatian one; the Sorbian languages blend the Polish and Czech systems.
|
62 |
+
|
63 |
+
The diacritics used in the Polish alphabet are the kreska (graphically similar to the acute accent) in the letters ć, ń, ó, ś, ź and through the letter in ł; the kropka (superior dot) in the letter ż, and the ogonek ("little tail") in the letters ą, ę. The letters q, v, x are used only in foreign words and names.[53]
|
64 |
+
|
65 |
+
Polish orthography is largely phonemic—there is a consistent correspondence between letters (or digraphs and trigraphs) and phonemes (for exceptions see below). The letters of the alphabet and their normal phonemic values are listed in the following table.
|
66 |
+
|
67 |
+
The following digraphs and trigraphs are used:
|
68 |
+
|
69 |
+
Voiced consonant letters frequently come to represent voiceless sounds (as shown in the tables); this occurs at the end of words and in certain clusters, due to the neutralization mentioned in the Phonology section above. Occasionally, voiceless consonant letters can also represent voiced sounds in clusters.
|
70 |
+
|
71 |
+
The spelling rule for the palatal sounds /ɕ/, /ʑ/, /tɕ/, /dʑ/ and /ɲ/ is as follows: before the vowel i the plain letters s, z, c, dz, n are used; before other vowels the combinations si, zi, ci, dzi, ni are used; when not followed by a vowel the diacritic forms ś, ź, ć, dź, ń are used. For example, the s in siwy ("grey-haired"), the si in siarka ("sulphur") and the ś in święty ("holy") all represent the sound /ɕ/. The exceptions to the above rule are certain loanwords from Latin, Italian, French, Russian or English, where s before i is pronounced as /s/, e.g. sinus, sinologia, do re mi fa sol la si do, Saint-Simon i saint-simoniści, Sierioża, Siergiej, Singapur, singiel. In other loanwords the vowel i is changed to y, e.g. Syria, Sybir, synchronizacja, Syrakuzy.
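The three-way rule just described is mechanical enough to sketch. This Python fragment is an illustration only; the lookup table, function name, and simplified vowel set are my own, and it deliberately ignores the loanword exceptions mentioned above:

```python
# Diacritic form, digraph with <i>, and bare letter for each palatal.
PALATALS = {"ɕ": ("ś", "si", "s"), "ʑ": ("ź", "zi", "z"),
            "tɕ": ("ć", "ci", "c"), "dʑ": ("dź", "dzi", "dz"),
            "ɲ": ("ń", "ni", "n")}
VOWELS = set("aeiouyąęó")

def spell_palatal(phoneme, following=""):
    """Choose the written form of a palatal consonant from what follows:
    the bare letter before <i>, the digraph with <i> before other vowels,
    and the diacritic form when no vowel follows."""
    diacritic, digraph, plain = PALATALS[phoneme]
    if following[:1] == "i":
        return plain        # siwy: s + i spells /ɕi/
    if following[:1] in VOWELS:
        return digraph      # siarka: si + a spells /ɕa/
    return diacritic        # święty: ś before a consonant
```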
|
72 |
+
|
73 |
+
The following table shows the correspondence between the sounds and spelling:
|
74 |
+
|
75 |
+
Digraphs and trigraphs are used:
|
76 |
+
|
77 |
+
Similar principles apply to /kʲ/, /ɡʲ/, /xʲ/ and /lʲ/, except that these can only occur before vowels, so the spellings are k, g, (c)h, l before i, and ki, gi, (c)hi, li otherwise. Most Polish speakers, however, do not consider palatalisation of k, g, (c)h or l as creating new sounds.
|
78 |
+
|
79 |
+
Except in the cases mentioned above, the letter i, if followed by another vowel in the same word, usually represents /j/, while palatalisation of the previous consonant is still assumed.
|
80 |
+
|
81 |
+
The letters ą and ę, when followed by plosives and affricates, represent an oral vowel followed by a nasal consonant, rather than a nasal vowel. For example, ą in dąb ("oak") is pronounced /ɔm/, and ę in tęcza ("rainbow") is pronounced /ɛn/ (the nasal assimilates to the following consonant). When followed by l or ł (for example in przyjęli, przyjęły), ę is pronounced as just /ɛ/. When ę occurs at the end of a word, it is often also pronounced as just /ɛ/.
|
82 |
+
|
83 |
+
Note that, depending on the word, the phoneme /x/ can be spelt h or ch, the phoneme /ʐ/ can be spelt ż or rz, and /u/ can be spelt u or ó. In several cases the choice of spelling determines the meaning, for example: może ("maybe") and morze ("sea").
|
84 |
+
|
85 |
+
In occasional words, letters that normally form a digraph are pronounced separately. For example, rz represents /rz/, not /ʐ/, in words like zamarzać ("freeze") and in the name Tarzan.
|
86 |
+
|
87 |
+
Notice that doubled letters represent separate occurrences of the sound in question; for example Anna is pronounced /anːa/ in Polish (the double n is often pronounced as a lengthened single n).
|
88 |
+
|
89 |
+
There are certain clusters where a written consonant would not be pronounced. For example, the ł in the words mógł ("could") and jabłko ("apple") might be omitted in ordinary speech, leading to the pronunciations muk and japko or jabko.
|
90 |
+
|
91 |
+
Polish is a highly fusional language with relatively free word order, although the dominant arrangement is subject–verb–object (SVO). There are no articles, and subject pronouns are often dropped.
|
92 |
+
|
93 |
+
Nouns belong to one of three genders: masculine, feminine and neuter. A distinction is also made between animate and inanimate masculine nouns in the singular, and between masculine personal and non-masculine-personal nouns in the plural. There are seven cases: nominative, genitive, dative, accusative, instrumental, locative and vocative.
|
94 |
+
|
95 |
+
Adjectives agree with nouns in terms of gender, case, and number. Attributive adjectives most commonly precede the noun, although in certain cases, especially in fixed phrases (like język polski, "Polish (language)"), the noun may come first; the rule of thumb is that a generic descriptive adjective normally precedes the noun (e.g. piękny kwiat, “beautiful flower”) while a categorising adjective often follows it (e.g. węgiel kamienny, “black
|
96 |
+
coal”). Most short adjectives and their derived adverbs form comparatives and superlatives by inflection (the superlative is formed by prefixing naj- to the comparative).
|
97 |
+
|
98 |
+
Verbs are of imperfective or perfective aspect, often occurring in pairs. Imperfective verbs have a present tense, past tense, compound future tense (except for być "to be", which has a simple future będę etc., this in turn being used to form the compound future of other verbs), subjunctive/conditional (formed with the detachable particle by), imperatives, an infinitive, present participle, present gerund and past participle. Perfective verbs have a simple future tense (formed like the present tense of imperfective verbs), past tense, subjunctive/conditional, imperatives, infinitive, present gerund and past participle. Conjugated verb forms agree with their subject in terms of person, number, and (in the case of past tense and subjunctive/conditional forms) gender.
|
99 |
+
|
100 |
+
Passive-type constructions can be made using the auxiliary być or zostać ("become") with the passive participle. There is also an impersonal construction where the active verb is used (in third person singular) with no subject, but with the reflexive pronoun się present to indicate a general, unspecified subject (as in pije się wódkę "vodka is being drunk"—note that wódka appears in the accusative). A similar sentence type in the past tense uses the passive participle with the ending -o, as in widziano ludzi ("people were seen"). As in other Slavic languages, there are also subjectless sentences formed using such words as można ("it is possible") together with an infinitive.
|
101 |
+
|
102 |
+
Yes-no questions (both direct and indirect) are formed by placing the word czy at the start. Negation uses the word nie, before the verb or other item being negated; nie is still added before the verb even if the sentence also contains other negatives such as nigdy ("never") or nic ("nothing"), effectively creating a double negative.
|
103 |
+
|
104 |
+
Cardinal numbers have a complex system of inflection and agreement. Zero and cardinal numbers of five and above (except those ending in the digit 2, 3 or 4 but not ending in 12, 13 or 14) govern the genitive case rather than the nominative or accusative. Special forms of numbers (collective numerals) are used with certain classes of noun, which include dziecko ("child") and exclusively plural nouns such as drzwi ("door").
|
105 |
+
|
106 |
+
Polish has, over the centuries, borrowed a number of words from other languages. When borrowing, pronunciation was adapted to Polish phonemes and spelling was altered to match Polish orthography. In addition, word endings are liberally applied to almost any word to produce verbs, nouns, adjectives, as well as adding the appropriate endings for cases of nouns, adjectives, diminutives, double-diminutives, augmentatives, etc.
|
107 |
+
|
108 |
+
Depending on the historical period, borrowing has proceeded from various languages. Notable influences have been Latin (10th–18th centuries),[21] Czech (10th and 14th–15th centuries), Italian (16th–17th centuries),[21] French (17th–19th centuries),[21] German (13–15th and 18th–20th centuries), Hungarian (15th–16th centuries)[21] and Turkish (17th century). Currently, English words are the most common imports to Polish.[54]
|
109 |
+
|
110 |
+
The Latin language, for a very long time the only official language of the Polish state, has had a great influence on Polish. Many Polish words were direct borrowings or calques (e.g. rzeczpospolita from res publica) from Latin. Latin was known to a greater or lesser degree by most of the numerous szlachta in the 16th to 18th centuries (and it continued to be extensively taught at secondary schools until World War II). Apart from dozens of loanwords, its influence can also be seen in a number of verbatim Latin phrases in Polish literature (especially from the 19th century and earlier).
|
111 |
+
|
112 |
+
During the 12th and 13th centuries, Mongolian words were brought to the Polish language during wars with the armies of Genghis Khan and his descendants, e.g. dzida (spear) and szereg (a line or row).[54]
|
113 |
+
|
114 |
+
Words from Czech, an important influence during the 10th and 14th–15th centuries, include sejm, hańba and brama.[54]
|
115 |
+
|
116 |
+
In 1518, the Polish king Sigismund I the Old married Bona Sforza, the niece of the Holy Roman emperor Maximilian, who introduced Italian cuisine to Poland, especially vegetables.[55] Hence, words from Italian include pomidor from "pomodoro" (tomato), kalafior from "cavolfiore" (cauliflower), and pomarańcza, a portmanteau from Italian "pomo" (pome) plus "arancio" (orange). A later word of Italian origin is autostrada (from Italian "autostrada", highway).[55]
|
117 |
+
|
118 |
+
In the 18th century, with the rising prominence of France in Europe, French supplanted Latin as an important source of words. Some French borrowings also date from the Napoleonic era, when the Poles were enthusiastic supporters of Napoleon. Examples include ekran (from French "écran", screen), abażur ("abat-jour", lamp shade), rekin ("requin", shark), meble ("meuble", furniture), bagaż ("bagage", luggage), walizka ("valise", suitcase), fotel ("fauteuil", armchair), plaża ("plage", beach) and koszmar ("cauchemar", nightmare). Some place names have also been adapted from French, such as the Warsaw borough of Żoliborz ("joli bord" = beautiful riverside), as well as the town of Żyrardów (from the name Girard, with the Polish suffix -ów attached to refer to the founder of the town).[56]
|
119 |
+
|
120 |
+
Many words were borrowed from German, owing to the sizable German population in Polish cities during medieval times. German words found in the Polish language are often connected with trade, the building industry, civic rights and city life. Some words were assimilated verbatim, for example handel (trade) and dach (roof); others are pronounced the same but differ in spelling: Schnur—sznur (cord). As a result of being Germany's neighbour, Polish also has many German expressions that have been literally translated (calques). The regional dialects of Upper Silesia and Masuria (part of the former East Prussia) have noticeably more German loanwords than other varieties.
|
121 |
+
|
122 |
+
The contacts with Ottoman Turkey in the 17th century brought many new words, some of them still in use, such as: jar ("yar" deep valley), szaszłyk ("şişlik" shish kebab), filiżanka ("fincan" cup), arbuz ("karpuz" watermelon), dywan ("divan" carpet),[57] etc.
|
123 |
+
|
124 |
+
From the founding of the Kingdom of Poland in 1025 through the early years of the Polish-Lithuanian Commonwealth created in 1569, Poland was the most tolerant country of Jews in Europe. Known as the "paradise for the Jews",[58][59] it became a shelter for persecuted and expelled European Jewish communities and the home to the world's largest Jewish community of the time. As a result, many Polish words come from Yiddish, spoken by the large Polish Jewish population that existed until the Holocaust. Borrowed Yiddish words include bachor (an unruly boy or child), bajzel (slang for mess), belfer (slang for teacher), ciuchy (slang for clothing), cymes (slang for very tasty food), geszeft (slang for business), kitel (slang for apron), machlojka (slang for scam), mamona (money), manele (slang for oddments), myszygene (slang for lunatic), pinda (slang for girl, pejoratively), plajta (slang for bankruptcy), rejwach (noise), szmal (slang for money), and trefny (dodgy).[60]
|
125 |
+
|
126 |
+
The mountain dialects of the Górale in southern Poland have quite a number of words borrowed from Hungarian (e.g. baca, gazda, juhas, hejnał) and Romanian, as a result of historical contacts with Hungarian-dominated Slovakia and with Wallachian herders who travelled north along the Carpathians.[61]
|
127 |
+
|
128 |
+
Thieves' slang includes such words as kimać (to sleep) or majcher (knife), of Greek origin, which were then considered unknown to the outside world.[62]
|
129 |
+
|
130 |
+
In addition, Turkish and Tatar have exerted influence upon the vocabulary of war, names of oriental costumes etc.[21] Russian borrowings began to make their way into Polish from the second half of the 19th century on.[21]
|
131 |
+
|
132 |
+
Polish has also absorbed a large number of English loanwords, particularly after World War II.[21] Recent loanwords come primarily from the English language, mainly those that have Latin or Greek roots, for example komputer (computer) and korupcja (from 'corruption', but with the sense restricted to 'bribery'). Concatenation of parts of words (e.g. auto-moto), which is not native to Polish but is common in English, is also sometimes used. When borrowing English words, Polish often changes their spelling. For example, the Latin suffix '-tio' corresponds to -cja. To make the word plural, -cja becomes -cje. Examples of this include inauguracja (inauguration), dewastacja (devastation), recepcja (reception), konurbacja (conurbation) and konotacje (connotations). Also, the digraph qu becomes kw (kwadrant = quadrant; kworum = quorum).
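The spelling correspondences in this paragraph can be sketched mechanically. The following toy Python illustration is my own (the function name is invented, and real nativization is far less regular; it also adjusts other letters, such as v to w, which this sketch ignores):

```python
import re

def polonize_spelling(word):
    """Toy sketch of two adaptations mentioned in the text:
    the Latinate ending -tion maps to -cja, and the digraph qu
    maps to kw. Not a full model of Polish loanword spelling."""
    word = re.sub(r"tion$", "cja", word)
    return word.replace("qu", "kw")

print(polonize_spelling("inauguration"))  # inauguracja
print(polonize_spelling("quadrant"))      # kwadrant
print(polonize_spelling("quorum"))        # kworum
```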
|
133 |
+
|
134 |
+
The Polish language has influenced others. Particular influences appear in other Slavic languages and in German — due to their proximity and shared borders.[63] Examples of loanwords include German Grenze (border),[64] Dutch and Afrikaans grens from Polish granica; German Peitzker from Polish piskorz (weatherfish); German Zobel, French zibeline, Swedish sobel, and English sable from Polish soból; and ogonek ("little tail") — the word describing a diacritic hook-sign added below some letters in various alphabets. "Szmata," a Polish, Slovak and Ruthenian word for "mop" or "rag", became part of Yiddish. The Polish language exerted significant lexical influence upon Ukrainian, particularly in the fields of abstract and technical terminology; for example, the Ukrainian word панство panstvo (country) is derived from Polish państwo.[23] The extent of Polish influence is particularly noticeable in Western Ukrainian dialects.[23]
|
135 |
+
|
136 |
+
There is a substantial number of Polish words which officially became part of Yiddish, once the main language of European Jews. These include basic items, objects or terms such as a bread bun (Polish bułka, Yiddish בולקע bulke), a fishing rod (wędka, ווענטקע ventke), an oak (dąb, דעמב demb), a meadow (łąka, לאָנקע lonke), a moustache (wąsy, וואָנצעס vontses) and a bladder (pęcherz, פּענכער penkher).[65]
|
137 |
+
|
138 |
+
Quite a few culinary loanwords exist in German and in other languages, some of which describe distinctive features of Polish cuisine. These include German and English Quark from twaróg (a kind of fresh cheese) and German Gurke, English gherkin from ogórek (cucumber). The word pierogi (Polish dumplings) has spread internationally, as have pączki (Polish donuts)[66] and kiełbasa (sausage, e.g. kolbaso in Esperanto). As far as pierogi is concerned, the original Polish word is already plural (sing. pieróg, plural pierogi; stem pierog-, plural ending -i; note that o becomes ó in a closed syllable, as here in the singular), yet it is commonly used with the English plural ending -s in Canada and the United States of America, pierogis, thus making it a "double plural". A similar situation happened with the Polish loanword from English czipsy ("potato chips"): English chips is already plural in the original (chip + -s), yet it has obtained the Polish plural ending -y.
|
139 |
+
|
140 |
+
The word spruce entered the English language from the Polish name of Prusy (a historical region, today part of Poland). It became spruce because the Polish phrase z Prus ("from Prussia") sounded like "spruce" in English and was a generic term for commodities brought to England by Hanseatic merchants, and because the tree was believed to have come from Polish Ducal Prussia.[67] However, it can be argued that the word is actually derived from the Old French term Pruce, meaning literally Prussia.[68]
|
en/471.html.txt
ADDED
@@ -0,0 +1,49 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
1 |
+
|
2 |
+
|
3 |
+
An autobiography (from the Greek, αὐτός-autos self + βίος-bios life + γράφειν-graphein to write; also informally called an autobio[1]) is a self-written account of one's own life. The word "autobiography" was first used deprecatingly by William Taylor in 1797 in the English periodical The Monthly Review, when he suggested the word as a hybrid, but condemned it as "pedantic". However, its next recorded use was in its present sense, by Robert Southey in 1809.[2] Despite only being named early in the nineteenth century, first-person autobiographical writing originates in antiquity. Roy Pascal differentiates autobiography from the periodic self-reflective mode of journal or diary writing by noting that "[autobiography] is a review of a life from a particular moment in time, while the diary, however reflective it may be, moves through a series of moments in time".[3] Autobiography thus takes stock of the autobiographer's life from the moment of composition. While biographers generally rely on a wide variety of documents and viewpoints, autobiography may be based entirely on the writer's memory. The memoir form is closely associated with autobiography but it tends, as Pascal claims, to focus less on the self and more on others during the autobiographer's review of his or her life.[3]
|
4 |
+
|
5 |
+
Autobiographical works are by nature subjective. The inability—or unwillingness—of the author to accurately recall memories has in certain cases resulted in misleading or incorrect information. Some sociologists and psychologists have noted that autobiography offers the author the ability to recreate history.
|
6 |
+
|
7 |
+
Spiritual autobiography is an account of an author's struggle or journey towards God, followed by a religious conversion, often interrupted by moments of regression. The author re-frames his or her life as a demonstration of divine intention through encounters with the Divine. The earliest example of a spiritual autobiography is Augustine's Confessions, though the tradition has expanded to include other religious traditions in works such as Zahid Rohari's An Autobiography and Black Elk Speaks. The spiritual autobiography often works as an endorsement of the author's religion.
|
8 |
+
|
9 |
+
A memoir is slightly different in character from an autobiography. While an autobiography typically focuses on the "life and times" of the writer, a memoir has a narrower, more intimate focus on his or her own memories, feelings and emotions. Memoirs have often been written by politicians or military leaders as a way to record and publish an account of their public exploits. One early example is that of Julius Caesar's Commentarii de Bello Gallico, also known as Commentaries on the Gallic Wars. In the work, Caesar describes the battles that took place during the nine years that he spent fighting local armies in the Gallic Wars. His second memoir, Commentarii de Bello Civili (or Commentaries on the Civil War) is an account of the events that took place between 49 and 48 BC in the civil war against Gnaeus Pompeius and the Senate.
|
10 |
+
|
11 |
+
Leonor López de Córdoba (1362–1420) wrote what is supposed to be the first autobiography in Spanish. The English Civil War (1642–1651) provoked a number of examples of this genre, including works by Sir Edmund Ludlow and Sir John Reresby. French examples from the same period include the memoirs of Cardinal de Retz (1614–1679) and the Duc de Saint-Simon.
|
12 |
+
|
13 |
+
The term "fictional autobiography" signifies novels about a fictional character written as though the character were writing their own autobiography, meaning that the character is the first-person narrator and that the novel addresses both internal and external experiences of the character. Daniel Defoe's Moll Flanders is an early example. Charles Dickens' David Copperfield is another such classic, and J.D. Salinger's The Catcher in the Rye is a well-known modern example of fictional autobiography. Charlotte Brontë's Jane Eyre is yet another example of fictional autobiography, as noted on the front page of the original version. The term may also apply to works of fiction purporting to be autobiographies of real characters, e.g., Robert Nye's Memoirs of Lord Byron.
|
14 |
+
|
15 |
+
In antiquity such works were typically entitled apologia, purporting to be self-justification rather than self-documentation. John Henry Newman's Christian confessional work (first published in 1864) is entitled Apologia Pro Vita Sua in reference to this tradition.
|
16 |
+
|
17 |
+
The Jewish historian Flavius Josephus introduces his autobiography (Josephi Vita, c. 99) with self-praise, which is followed by a justification of his actions as a Jewish rebel commander of Galilee.[4]
|
18 |
+
|
19 |
+
The pagan rhetor Libanius (c. 314–394) framed his life memoir (Oration I, begun in 374) as one of his orations, not of a public kind, but of a literary kind that could only be read aloud in private.
|
20 |
+
|
21 |
+
Augustine (354–430) applied the title Confessions to his autobiographical work, and Jean-Jacques Rousseau used the same title in the 18th century, initiating the chain of confessional and sometimes racy and highly self-critical autobiographies of the Romantic era and beyond. Augustine's was arguably the first Western autobiography ever written, and it became an influential model for Christian writers throughout the Middle Ages. It tells of the hedonistic lifestyle Augustine lived for a time in his youth, associating with young men who boasted of their sexual exploits; his following and later leaving of the anti-sex and anti-marriage Manichaeism in attempts to seek sexual morality; and his subsequent return to Christianity due to his embrace of Skepticism and the New Academy movement (developing the view that sex is good, and that virginity is better, comparing the former to silver and the latter to gold; Augustine's views subsequently strongly influenced Western theology[5]). Confessions will always rank among the great masterpieces of western literature.[6]
|
22 |
+
|
23 |
+
In the spirit of Augustine's Confessions is the 12th-century Historia Calamitatum of Peter Abelard, outstanding as an autobiographical document of its period.
|
24 |
+
|
25 |
+
In the 15th century, Leonor López de Córdoba, a Spanish noblewoman, wrote her Memorias, which may be the first autobiography in Castilian.
|
26 |
+
|
27 |
+
Zāhir ud-Dīn Mohammad Bābur, who founded the Mughal dynasty of South Asia, kept a journal, the Bāburnāma (Chagatai/Persian: بابر نامہ; literally: "Book of Babur" or "Letters of Babur"), which was written between 1493 and 1529.
|
28 |
+
|
29 |
+
One of the first great autobiographies of the Renaissance is that of the sculptor and goldsmith Benvenuto Cellini (1500–1571), written between 1556 and 1558, and entitled by him simply Vita (Italian: Life). He declares at the start: "No matter what sort he is, everyone who has to his credit what are or really seem great achievements, if he cares for truth and goodness, ought to write the story of his own life in his own hand; but no one should venture on such a splendid undertaking before he is over forty."[7] These criteria for autobiography generally persisted until recent times, and most serious autobiographies of the next three hundred years conformed to them.
|
30 |
+
|
31 |
+
Another autobiography of the period is De vita propria, by the Italian mathematician, physician and astrologer Gerolamo Cardano (1574).
|
32 |
+
|
33 |
+
The earliest known autobiography written in English is the Book of Margery Kempe, written in 1438.[8] Following in the earlier tradition of a life story told as an act of Christian witness, the book describes Margery Kempe's pilgrimages to the Holy Land and Rome, her attempts to negotiate a celibate marriage with her husband, and most of all her religious experiences as a Christian mystic. Extracts from the book were published in the early sixteenth century but the whole text was published for the first time only in 1936.[9]
|
34 |
+
|
35 |
+
Possibly the first publicly available autobiography written in English was Captain John Smith's autobiography, published in 1630,[10] which was regarded by many as little more than a collection of tall tales told by someone of doubtful veracity. This changed with the publication of Philip Barbour's definitive biography in 1964, which, amongst other things, established independent factual bases for many of Smith's "tall tales", many of which could not have been known by Smith at the time of writing unless he was actually present at the events recounted.[11]
|
36 |
+
|
37 |
+
Other notable English autobiographies of the 17th century include those of Lord Herbert of Cherbury (1643, published 1764) and John Bunyan (Grace Abounding to the Chief of Sinners, 1666).
|
38 |
+
|
39 |
+
Jarena Lee (1783–1864) was the first African American woman to have a published autobiography in the United States.[12]
|
40 |
+
|
41 |
+
Following the trend of Romanticism, which greatly emphasized the role and the nature of the individual, and in the footsteps of Jean-Jacques Rousseau's Confessions, a more intimate form of autobiography, exploring the subject's emotions, came into fashion. Stendhal's autobiographical writings of the 1830s, The Life of Henry Brulard and Memoirs of an Egotist, are both avowedly influenced by Rousseau.[13] An English example is William Hazlitt's Liber Amoris (1823), a painful examination of the writer's love-life.
|
42 |
+
|
43 |
+
With the rise of education, cheap newspapers and cheap printing, modern concepts of fame and celebrity began to develop, and the beneficiaries of this were not slow to cash in on this by producing autobiographies. It became the expectation—rather than the exception—that those in the public eye should write about themselves—not only writers such as Charles Dickens (who also incorporated autobiographical elements in his novels) and Anthony Trollope, but also politicians (e.g. Henry Brooks Adams), philosophers (e.g. John Stuart Mill), churchmen such as Cardinal Newman, and entertainers such as P. T. Barnum. Increasingly, in accordance with romantic taste, these accounts also began to deal, amongst other topics, with aspects of childhood and upbringing—far removed from the principles of "Cellinian" autobiography.
|
44 |
+
|
45 |
+
From the 17th century onwards, "scandalous memoirs" by supposed libertines, serving a public taste for titillation, have been frequently published. Typically pseudonymous, they were (and are) largely works of fiction written by ghostwriters. So-called "autobiographies" of modern professional athletes and media celebrities—and to a lesser extent about politicians—generally written by a ghostwriter, are routinely published. Some celebrities, such as Naomi Campbell, admit to not having read their "autobiographies".[citation needed] Some sensationalist autobiographies such as James Frey's A Million Little Pieces have been publicly exposed as having embellished or fictionalized significant details of the authors' lives.
|
46 |
+
|
47 |
+
Autobiography has become an increasingly popular and widely accessible form. A Fortunate Life by Albert Facey (1979) has become an Australian literary classic.[14] With the critical and commercial success in the United States of such memoirs as Angela's Ashes and The Color of Water, more and more people have been encouraged to try their hand at this genre. A recent example is Maggie Nelson's The Argonauts, which Nelson calls "autotheory", a combination of autobiography and critical theory.[15]
|
48 |
+
|
49 |
+
A genre where the "claim for truth" overlaps with fictional elements though the work still purports to be autobiographical is autofiction.
|
en/4710.html.txt
ADDED
The diff for this file is too large to render.
en/4711.html.txt
ADDED
@@ -0,0 +1,126 @@
Pollution is the introduction of contaminants into the natural environment that cause adverse change.[1] Pollution can take the form of chemical substances or energy, such as noise, heat or light. Pollutants, the components of pollution, can be either foreign substances/energies or naturally occurring contaminants. Pollution is often classed as point source or nonpoint source pollution. In 2015, pollution killed 9 million people in the world.[2][3]
Major forms of pollution include air pollution, light pollution, littering, noise pollution, plastic pollution, soil contamination, radioactive contamination, thermal pollution, visual pollution, and water pollution.
Air pollution has always accompanied civilizations. Pollution started in prehistoric times, when humans created the first fires. According to a 1983 article in the journal Science, soot found on the ceilings of prehistoric caves "provides ample evidence of the high levels of pollution that was associated with inadequate ventilation of open fires".[4] Metal forging appears to be a key turning point in the creation of significant air pollution levels outside the home. Core samples of glaciers in Greenland indicate increases in pollution associated with Greek, Roman, and Chinese metal production.[5]
The burning of coal and wood, and the presence of many horses in concentrated areas made the cities the primary sources of pollution. The Industrial Revolution brought an infusion of untreated chemicals and wastes into local streams that served as the water supply. King Edward I of England banned the burning of sea-coal by proclamation in London in 1272, after its smoke became a problem;[6][7] the fuel was so common in England that this earliest of names for it was acquired because it could be carted away from some shores by the wheelbarrow.
It was the Industrial Revolution that gave birth to environmental pollution as we know it today. London also recorded one of the earlier extreme cases of water quality problems with the Great Stink on the Thames of 1858, which led to construction of the London sewerage system soon afterward. Pollution issues escalated as population growth far exceeded viability of neighborhoods to handle their waste problem. Reformers began to demand sewer systems and clean water.[8]
In 1870, the sanitary conditions in Berlin were among the worst in Europe. August Bebel recalled conditions before a modern sewer system was built in the late 1870s:
Waste-water from the houses collected in the gutters running alongside the curbs and emitted a truly fearsome smell. There were no public toilets in the streets or squares. Visitors, especially women, often became desperate when nature called. In the public buildings the sanitary facilities were unbelievably primitive.... As a metropolis, Berlin did not emerge from a state of barbarism into civilization until after 1870.[9]
The primitive conditions were intolerable for a world national capital, and the Imperial German government brought in its scientists, engineers, and urban planners to not only solve the deficiencies, but to forge Berlin as the world's model city. A British expert in 1906 concluded that Berlin represented "the most complete application of science, order and method of public life," adding "it is a marvel of civic administration, the most modern and most perfectly organized city that there is."[10]
The emergence of great factories and consumption of immense quantities of coal gave rise to unprecedented air pollution, and the large volume of industrial chemical discharges added to the growing load of untreated human waste. Chicago and Cincinnati were the first two American cities to enact laws ensuring cleaner air, in 1881. Pollution became a major issue in the United States in the early twentieth century, as progressive reformers took issue with air pollution caused by coal burning, water pollution caused by bad sanitation, and street pollution caused by the 3 million horses that worked in American cities in 1900, generating large quantities of urine and manure. As historian Martin Melosi notes, the generation that first saw automobiles replacing the horses saw cars as "miracles of cleanliness".[11] By the 1940s, however, automobile-caused smog was a major issue in Los Angeles.[12]
Other cities followed around the country until early in the 20th century, when the short-lived Office of Air Pollution was created under the Department of the Interior. Extreme smog events were experienced by the cities of Los Angeles and Donora, Pennsylvania, in the late 1940s, serving as another public reminder.[13]
Air pollution would continue to be a problem in England, especially later during the industrial revolution, and extending into the recent past with the Great Smog of 1952. Awareness of atmospheric pollution spread widely after World War II, with fears triggered by reports of radioactive fallout from atomic warfare and testing.[14] Then a non-nuclear event – the Great Smog of 1952 in London – killed at least 4000 people.[15] This prompted some of the first major modern environmental legislation: the Clean Air Act of 1956.
Pollution began to draw major public attention in the United States between the mid-1950s and early 1970s, when Congress passed the Noise Control Act, the Clean Air Act, the Clean Water Act, and the National Environmental Policy Act.[16]
Severe incidents of pollution helped increase consciousness. PCB dumping in the Hudson River resulted in a ban by the EPA on consumption of its fish in 1974. National news stories in the late 1970s – especially the long-term dioxin contamination at Love Canal starting in 1947 and uncontrolled dumping in Valley of the Drums – led to the Superfund legislation of 1980.[17] The pollution of industrial land gave rise to the name brownfield, a term now common in city planning.
The development of nuclear science introduced radioactive contamination, which can remain lethally radioactive for hundreds of thousands of years. Lake Karachay – named by the Worldwatch Institute as the "most polluted spot" on earth – served as a disposal site for the Soviet Union throughout the 1950s and 1960s. Chelyabinsk, Russia, is considered the "Most polluted place on the planet".[18]
Nuclear weapons continued to be tested in the Cold War, especially in the earlier stages of their development. The toll on the worst-affected populations, and the growth since then in understanding of the critical threat to human health posed by radioactivity, has also been a prohibitive complication associated with nuclear power. Though extreme care is practiced in that industry, the potential for disaster suggested by incidents such as those at Three Mile Island, Chernobyl, and Fukushima poses a lingering specter of public mistrust. Worldwide publicity has been intense on those disasters.[19] Widespread support for test ban treaties has ended almost all nuclear testing in the atmosphere.[20]
International catastrophes such as the wreck of the Amoco Cadiz oil tanker off the coast of Brittany in 1978 and the Bhopal disaster in 1984 have demonstrated the universality of such events and the scale on which efforts to address them needed to engage. The borderless nature of atmosphere and oceans inevitably resulted in the implication of pollution on a planetary level with the issue of global warming. Most recently the term persistent organic pollutant (POP) has come to describe a group of chemicals such as PBDEs and PFCs among others. Though their effects remain somewhat less well understood owing to a lack of experimental data, they have been detected in various ecological habitats far removed from industrial activity such as the Arctic, demonstrating diffusion and bioaccumulation after only a relatively brief period of widespread use.
A much more recently discovered problem is the Great Pacific Garbage Patch, a huge concentration of plastics, chemical sludge and other debris which has been collected into a large area of the Pacific Ocean by the North Pacific Gyre. This is a less well known pollution problem than the others described above, but nonetheless has multiple and serious consequences such as increasing wildlife mortality, the spread of invasive species and human ingestion of toxic chemicals. Organizations such as 5 Gyres have researched the pollution and, along with artists like Marina DeBris, are working toward publicizing the issue.
Pollution introduced by light at night is becoming a global problem, more severe in urban centres but also contaminating large territories far away from towns.[21]
Growing evidence of local and global pollution and an increasingly informed public over time have given rise to environmentalism and the environmental movement, which generally seek to limit human impact on the environment.
The major forms of pollution are listed above along with the particular contaminant relevant to each of them.
A pollutant is a waste material that pollutes air, water, or soil. The severity of a pollutant is determined by its chemical nature, its concentration, the area affected, and its persistence.
Pollution has a cost.[23][24][25] Manufacturing activities that cause air pollution impose health and clean-up costs on the whole of society, whereas the neighbors of an individual who chooses to fire-proof his home may benefit from a reduced risk of a fire spreading to their own homes. A manufacturing activity that causes air pollution is an example of a negative externality in production. A negative externality in production occurs “when a firm’s production reduces the well-being of others who are not compensated by the firm."[26] For example, if a laundry firm exists near a polluting steel manufacturing firm, there will be increased costs for the laundry firm because of the dirt and smoke produced by the steel manufacturing firm.[27] If external costs exist, such as those created by pollution, the manufacturer will choose to produce more of the product than would be produced if the manufacturer were required to pay all associated environmental costs. Because responsibility or consequence for self-directed action lies partly outside the self, an element of externalization is involved. If there are external benefits, such as in public safety, less of the good may be produced than would be the case if the producer were to receive payment for the external benefits to others. However, goods and services that involve negative externalities in production, such as those that produce pollution, tend to be over-produced and underpriced since the externality is not being priced into the market.[26]
Pollution can also create costs for the firms producing the pollution. Sometimes firms choose, or are forced by regulation, to reduce the amount of pollution that they are producing. The associated costs of doing this are called abatement costs, or marginal abatement costs if measured by each additional unit.[28] In 2005 pollution abatement capital expenditures and operating costs in the US amounted to nearly $27 billion.[29]
Society derives some indirect utility from pollution, otherwise there would be no incentive to pollute. This utility comes from the consumption of goods and services that create pollution. Therefore, it is important that policymakers attempt to balance these indirect benefits with the costs of pollution in order to achieve an efficient outcome.[30]
It is possible to use environmental economics to determine which level of pollution is deemed the social optimum. For economists, pollution is an “external cost and occurs only when one or more individuals suffer a loss of welfare,” however, there exists a socially optimal level of pollution at which welfare is maximized.[31] This is because consumers derive utility from the good or service manufactured, which will outweigh the social cost of pollution until a certain point. At this point the damage of one extra unit of pollution to society, the marginal cost of pollution, is exactly equal to the marginal benefit of consuming one more unit of the good or service.[32]
In markets with pollution, or other negative externalities in production, the free market equilibrium will not account for the costs of pollution on society. If the social costs of pollution are higher than the private costs incurred by the firm, then the true supply curve will be higher. The point at which the social marginal cost and market demand intersect gives the socially optimal level of pollution. At this point, the quantity will be lower and the price will be higher in comparison to the free market equilibrium.[32] Therefore, the free market outcome could be considered a market failure because it “does not maximize efficiency”.[26]
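The model described in the last few paragraphs can be sketched numerically. Below is a minimal illustration with simple linear curves; all coefficients are hypothetical and do not come from the text — the point is only that the socially optimal quantity sits where marginal benefit equals marginal social cost, and that the unregulated market over-produces and underprices:

```python
# Hypothetical linear externality model: every number below is made up
# for illustration, not taken from any real market data.

def social_optimum(a, b, c, d):
    """Return (quantity, price) where marginal benefit a - b*q
    equals marginal social cost c + d*q."""
    q = (a - c) / (b + d)      # solve a - b*q = c + d*q for q
    p = a - b * q              # price read off the demand (MB) curve
    return q, p

def private_equilibrium(a, b, c_priv, d_priv):
    """Free-market outcome where the firm ignores external costs:
    its marginal private cost c_priv + d_priv*q sits below the social cost."""
    q = (a - c_priv) / (b + d_priv)
    return q, a - b * q

# Demand: MB(q) = 100 - 2q; social cost: MSC(q) = 20 + 2q;
# the private cost curve omits a constant external cost of 20 per unit.
q_social, p_social = social_optimum(100, 2, 20, 2)
q_market, p_market = private_equilibrium(100, 2, 0, 2)

print(q_social, p_social)   # 20.0 60.0
print(q_market, p_market)   # 25.0 50.0 -> over-produced and underpriced
```

As the paragraph above states, the socially optimal quantity (20) is lower and its price (60) higher than the free-market outcome (25 at 50).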
This model can be used as a basis to evaluate different methods of internalizing the externality. Some examples include tariffs, carbon taxes, and cap-and-trade systems.
Air pollution comes from both natural and human-made (anthropogenic) sources. However, globally human-made pollutants from combustion, construction, mining, agriculture and warfare are increasingly significant in the air pollution equation.[33]
Motor vehicle emissions are one of the leading causes of air pollution.[34][35][36] China, the United States, Russia, India,[37] Mexico, and Japan are the world leaders in air pollution emissions. Principal stationary pollution sources include chemical plants, coal-fired power plants, oil refineries,[38] petrochemical plants, nuclear waste disposal activity, incinerators, large livestock farms (dairy cows, pigs, poultry, etc.), PVC factories, metals production factories, plastics factories, and other heavy industry. Agricultural air pollution comes from contemporary practices which include clear felling and burning of natural vegetation as well as spraying of pesticides and herbicides.[39]
About 400 million metric tons of hazardous wastes are generated each year.[40] The United States alone produces about 250 million metric tons.[41] Americans constitute less than 5% of the world's population, but produce roughly 25% of the world's CO2,[42] and generate approximately 30% of world's waste.[43][44] In 2007, China overtook the United States as the world's biggest producer of CO2,[45] while still far behind based on per capita pollution (ranked 78th among the world's nations).[46]
In February 2007, a report by the Intergovernmental Panel on Climate Change (IPCC), representing the work of 2,500 scientists, economists, and policymakers from more than 120 countries, confirmed that humans have been the primary cause of global warming since 1950. The report concluded that humans have ways to cut greenhouse gas emissions and avoid the consequences of global warming, but that the transition away from fossil fuels like coal and oil needs to occur within decades.[47]
Some of the more common soil contaminants are chlorinated hydrocarbons (CFH), heavy metals (such as chromium, cadmium – found in rechargeable batteries, and lead – found in lead paint, aviation fuel and still in some countries, gasoline), MTBE, zinc, arsenic and benzene. In 2001 a series of press reports culminating in a book called Fateful Harvest unveiled a widespread practice of recycling industrial byproducts into fertilizer, resulting in the contamination of the soil with various metals. Ordinary municipal landfills are the source of many chemical substances entering the soil environment (and often groundwater), emanating from the wide variety of refuse accepted, especially substances illegally discarded there, or from pre-1970 landfills that may have been subject to little control in the U.S. or EU. There have also been some unusual releases of polychlorinated dibenzodioxins, commonly called dioxins for simplicity, such as TCDD.[48]
Pollution can also be the consequence of a natural disaster. For example, hurricanes often involve water contamination from sewage, and petrochemical spills from ruptured boats or automobiles. Larger scale and environmental damage is not uncommon when coastal oil rigs or refineries are involved. Some sources of pollution, such as nuclear power plants or oil tankers, can produce widespread and potentially hazardous releases when accidents occur.
In the case of noise pollution the dominant source class is the motor vehicle, producing about ninety percent of all unwanted noise worldwide.
Adverse air quality can kill many organisms, including humans. Ozone pollution can cause respiratory disease, cardiovascular disease, throat inflammation, chest pain, and congestion. Water pollution causes approximately 14,000 deaths per day, mostly due to contamination of drinking water by untreated sewage in developing countries. An estimated 500 million Indians have no access to a proper toilet.[52][53] Over ten million people in India fell ill with waterborne illnesses in 2013, and 1,535 people died, most of them children.[54] Nearly 500 million Chinese lack access to safe drinking water.[55] A 2010 analysis estimated that 1.2 million people died prematurely each year in China because of air pollution.[56] The high smog levels China has been facing for a long time can do damage to civilians' bodies and cause different diseases.[57] The WHO estimated in 2007 that air pollution causes half a million deaths per year in India.[58] Studies have estimated that the number of people killed annually in the United States could be over 50,000.[59]
Oil spills can cause skin irritations and rashes. Noise pollution induces hearing loss, high blood pressure, stress, and sleep disturbance. Mercury has been linked to developmental deficits in children and neurologic symptoms. Older people are especially exposed to diseases induced by air pollution; those with heart or lung disorders are at additional risk, and children and infants are also at serious risk. Lead and other heavy metals have been shown to cause neurological problems. Chemical and radioactive substances can cause cancer as well as birth defects.
An October 2017 study by the Lancet Commission on Pollution and Health found that global pollution, specifically toxic air, water, soils and workplaces, kills nine million people annually, which is triple the number of deaths caused by AIDS, tuberculosis and malaria combined, and 15 times higher than deaths caused by wars and other forms of human violence.[60] The study concluded that "pollution is one of the great existential challenges of the Anthropocene era. Pollution endangers the stability of the Earth’s support systems and threatens the continuing survival of human societies."[3]
Pollution has been found to be present widely in the environment. This has a number of effects.
The Toxicology and Environmental Health Information Program (TEHIP)[61] at the United States National Library of Medicine (NLM) maintains a comprehensive toxicology and environmental health web site that includes access to resources produced by TEHIP and by other government agencies and organizations. This web site includes links to databases, bibliographies, tutorials, and other scientific and consumer-oriented resources. TEHIP also is responsible for the Toxicology Data Network (TOXNET)[62] an integrated system of toxicology and environmental health databases that are available free of charge on the web.
TOXMAP is a Geographic Information System (GIS) that is part of TOXNET. TOXMAP uses maps of the United States to help users visually explore data from the United States Environmental Protection Agency's (EPA) Toxics Release Inventory and Superfund Basic Research Programs.
A 2019 paper linked pollution to adverse school outcomes for children.[63]
A number of studies show that pollution has an adverse effect on the productivity of both indoor and outdoor workers.[64][65][66][67]
To protect the environment from the adverse effects of pollution, many nations worldwide have enacted legislation to regulate various types of pollution as well as to mitigate the adverse effects of pollution.
Pollution control is a term used in environmental management. It means the control of emissions and effluents into air, water or soil. Without pollution control, the waste products from overconsumption, heating, agriculture, mining, manufacturing, transportation and other human activities, whether they accumulate or disperse, will degrade the environment. In the hierarchy of controls, pollution prevention and waste minimization are more desirable than pollution control. In the field of land development, low impact development is a similar technique for the prevention of urban runoff.
The earliest precursor of pollution generated by life forms would have been a natural function of their existence. The attendant consequences on viability and population levels fell within the sphere of natural selection. These would have included the demise of a population locally or ultimately, species extinction. Processes that were untenable would have resulted in a new balance brought about by changes and adaptations. At the extremes, for any form of life, consideration of pollution is superseded by that of survival.
For humankind, the factor of technology is a distinguishing and critical consideration, both as an enabler and an additional source of byproducts. Short of survival, human concerns include the range from quality of life to health hazards. Since science holds experimental demonstration to be definitive, modern treatment of toxicity or environmental harm involves defining a level at which an effect is observable. Common examples of fields where practical measurement is crucial include automobile emissions control, industrial exposure (e.g. Occupational Safety and Health Administration (OSHA) PELs), toxicology (e.g. LD50), and medicine (e.g. medication and radiation doses).
"The solution to pollution is dilution" is a dictum that summarizes a traditional approach to pollution management, whereby sufficiently diluted pollution is not harmful.[69][70] It remains well suited to some modern, locally scoped applications such as laboratory safety procedure and hazardous material release emergency management. But it assumes that the diluent is in virtually unlimited supply for the application, or that resulting dilutions are acceptable in all cases.
Such simple treatment for environmental pollution on a wider scale might have had greater merit in earlier centuries when physical survival was often the highest imperative, human population and densities were lower, technologies were simpler and their byproducts more benign. But these are often no longer the case. Furthermore, advances have enabled measurement of concentrations not possible before. The use of statistical methods in evaluating outcomes has given currency to the principle of probable harm in cases where assessment is warranted but resorting to deterministic models is impractical or infeasible. In addition, consideration of the environment beyond direct impact on human beings has gained prominence.
Yet in the absence of a superseding principle, this older approach predominates in practices throughout the world. It is the basis by which concentrations of effluent are gauged for legal release, exceeding which penalties are assessed or restrictions applied. One such superseding principle is contained in modern hazardous waste laws in developed countries, as the process of diluting hazardous waste to make it non-hazardous is usually a regulated treatment process.[71] Migration from pollution dilution to elimination can in many cases be confronted by challenging economic and technological barriers.
Carbon dioxide, while vital for photosynthesis, is sometimes referred to as pollution, because raised levels of the gas in the atmosphere are affecting the Earth's climate. Disruption of the environment can also highlight the connection between areas of pollution that would normally be classified separately, such as those of water and air. Recent studies have investigated the potential for long-term rising levels of atmospheric carbon dioxide to cause slight but critical increases in the acidity of ocean waters, and the possible effects of this on marine ecosystems.
Air pollution fluctuations are known to depend strongly on weather dynamics. A recent study developed a multi-layered network analysis and detected strong interlinks between the geopotential height of the upper air (5 km) and surface air pollution in both China and the USA.[74] This study indicates that Rossby waves significantly affect air pollution fluctuations through the development of cyclone and anticyclone systems, which in turn affect the local stability of the air and the winds. The impact of Rossby waves on air pollution has been observed in daily fluctuations in surface air pollution. The impact of Rossby waves on human life is thus significant, and rapid warming of the Arctic could slow Rossby waves down, increasing human health risks.
Pure Earth, an international not-for-profit organization dedicated to eliminating life-threatening pollution in the developing world, issues an annual list of some of the world's most polluting industries.[75]
A 2018 report by the Institute for Agriculture and Trade Policy and GRAIN says that the meat and dairy industries are poised to surpass the oil industry as the world's worst polluters.[76]
Pure Earth issues an annual list of some of the world's worst polluted places.[77]
Air pollution
Soil contamination
Water pollution
Other
en/4712.html.txt
ADDED
@@ -0,0 +1,102 @@
Color (American English), or colour (Commonwealth English), is the characteristic of visual perception described through color categories, with names such as red, orange, yellow, green, blue, or purple. This perception of color derives from the stimulation of photoreceptor cells (in particular cone cells in the human eye and other vertebrate eyes) by electromagnetic radiation (in the visible spectrum in the case of humans). Color categories and physical specifications of color are associated with objects through the wavelengths of the light that is reflected from them and their intensities. This reflection is governed by the object's physical properties such as light absorption, emission spectra, etc.
By defining a color space, colors can be identified numerically by coordinates; in 1931 the International Commission on Illumination also associated such coordinates with internationally agreed color names like those mentioned above (red, orange, etc.). The RGB color space, for instance, is a color space corresponding to human trichromacy and to the three cone cell types that respond to three bands of light: long wavelengths, peaking near 564–580 nm (red); medium wavelengths, peaking near 534–545 nm (green); and short wavelengths, near 420–440 nm (blue).[1][2] There may also be more than three color dimensions in other color spaces, such as in the CMYK color model, wherein one of the dimensions relates to a color's colorfulness.
The photo-receptivity of the "eyes" of other species also varies considerably from that of humans and so results in correspondingly different color perceptions that cannot readily be compared to one another. Honey bees and bumblebees, for instance, have trichromatic color vision that is sensitive to ultraviolet but insensitive to red. Papilio butterflies possess six types of photoreceptors and may have pentachromatic vision.[3] The most complex color vision system in the animal kingdom has been found in stomatopods (such as the mantis shrimp), with up to 12 spectral receptor types thought to work as multiple dichromatic units.[4]
The science of color is sometimes called chromatics, colorimetry, or simply color science. It includes the study of the perception of color by the human eye and brain, the origin of color in materials, color theory in art, and the physics of electromagnetic radiation in the visible range (that is, what is commonly referred to simply as light).
Electromagnetic radiation is characterized by its wavelength (or frequency) and its intensity. When the wavelength is within the visible spectrum (the range of wavelengths humans can perceive, approximately from 390 nm to 700 nm), it is known as "visible light".
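The wavelength–frequency relation behind these figures, f = c/λ, is easy to check numerically. A small sketch (only the 390–700 nm bounds come from the text; the terahertz values are computed):

```python
C = 299_792_458  # speed of light in vacuum, m/s (defined value)

def wavelength_to_thz(nanometers):
    """Convert a vacuum wavelength in nm to frequency in THz via f = c / lambda."""
    return C / (nanometers * 1e-9) / 1e12

# Edges of the visible range quoted above, plus a mid-spectrum green.
for nm in (390, 550, 700):
    print(f"{nm} nm -> {wavelength_to_thz(nm):.0f} THz")
# 390 nm -> 769 THz
# 550 nm -> 545 THz
# 700 nm -> 428 THz
```

So the visible band spans roughly 430–770 THz, consistent with the 390–700 nm range given above.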
Most light sources emit light at many different wavelengths; a source's spectrum is a distribution giving its intensity at each wavelength. Although the spectrum of light arriving at the eye from a given direction determines the color sensation in that direction, there are many more possible spectral combinations than color sensations. In fact, one may formally define a color as a class of spectra that give rise to the same color sensation, although such classes would vary widely among different species, and to a lesser extent among individuals within the same species. In each such class the members are called metamers of the color in question. This effect can be visualized by comparing the light sources' spectral power distributions and the resulting colors.
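The idea of metamers can be made concrete with a toy linear model (the sensitivity curves below are entirely made up, not real cone data): a receptor's response is the dot product of the light spectrum with that receptor's sensitivity curve, so two different spectra whose difference lies in the null space of the sensitivity matrix produce identical responses — they are metamers:

```python
# Toy model: 4 wavelength bins and 3 invented "cone" sensitivity curves.
# Real cone fundamentals are smooth curves over hundreds of bins; this
# only sketches the linear algebra behind metamerism.
SENSITIVITIES = {
    "L": [1, 1, 0, 0],
    "M": [0, 1, 1, 0],
    "S": [0, 0, 1, 1],
}

def cone_responses(spectrum):
    """Response of each receptor type: spectrum dotted with its sensitivity."""
    return {name: sum(s * w for s, w in zip(spectrum, curve))
            for name, curve in SENSITIVITIES.items()}

flat  = [2, 2, 2, 2]   # a flat spectrum
spiky = [3, 1, 3, 1]   # differs by (1, -1, 1, -1), a null-space direction

print(cone_responses(flat))    # {'L': 4, 'M': 4, 'S': 4}
print(cone_responses(spiky))   # {'L': 4, 'M': 4, 'S': 4} -> a metamer
```

The two spectra are physically different yet indistinguishable to this three-receptor observer, which is exactly why there are many more possible spectra than color sensations.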
The familiar colors of the rainbow in the spectrum—named by Isaac Newton in 1671 using the Latin word for appearance or apparition—include all those colors that can be produced by visible light of a single wavelength only, the pure spectral or monochromatic colors. The table at right shows approximate frequencies (in terahertz) and wavelengths (in nanometers) for various pure spectral colors. The wavelengths listed are as measured in air or vacuum (see refractive index).
The color table should not be interpreted as a definitive list—the pure spectral colors form a continuous spectrum, and how it is divided into distinct colors linguistically is a matter of culture and historical contingency (although people everywhere have been shown to perceive colors in the same way[6]). A common list identifies six main bands: red, orange, yellow, green, blue, and violet. Newton's conception included a seventh color, indigo, between blue and violet. It is possible that what Newton referred to as blue is nearer to what today is known as cyan, and that indigo was simply the dark blue of the indigo dye that was being imported at the time.[7]
The intensity of a spectral color, relative to the context in which it is viewed, may alter its perception considerably; for example, a low-intensity orange-yellow is brown, and a low-intensity yellow-green is olive green.
The color of an object depends on both the physics of the object in its environment and the characteristics of the perceiving eye and brain. Physically, objects can be said to have the color of the light leaving their surfaces, which normally depends on the spectrum of the incident illumination and the reflectance properties of the surface, as well as potentially on the angles of illumination and viewing. Some objects not only reflect light, but also transmit light or emit light themselves, which also contributes to the color. A viewer's perception of the object's color depends not only on the spectrum of the light leaving its surface, but also on a host of contextual cues, so that color differences between objects can be discerned mostly independent of the lighting spectrum, viewing angle, etc. This effect is known as color constancy.
Some generalizations of the physics can be drawn, neglecting perceptual effects for now:
To summarize, the color of an object is a complex result of its surface properties, its transmission properties, and its emission properties, all of which contribute to the mix of wavelengths in the light leaving the surface of the object. The perceived color is then further conditioned by the nature of the ambient illumination, and by the color properties of other objects nearby, and via other characteristics of the perceiving eye and brain.
Although Aristotle and other ancient scientists had already written on the nature of light and color vision, it was not until Newton that light was identified as the source of the color sensation. In 1810, Goethe published his comprehensive Theory of Colors in which he ascribed physiological effects to color that are now understood as psychological.
In 1801 Thomas Young proposed his trichromatic theory, based on the observation that any color could be matched with a combination of three lights. This theory was later refined by James Clerk Maxwell and Hermann von Helmholtz. As Helmholtz puts it, "the principles of Newton's law of mixture were experimentally confirmed by Maxwell in 1856. Young's theory of color sensations, like so much else that this marvelous investigator achieved in advance of his time, remained unnoticed until Maxwell directed attention to it."[10]
At the same time as Helmholtz, Ewald Hering developed the opponent process theory of color, noting that color blindness and afterimages typically come in opponent pairs (red-green, blue-orange, yellow-violet, and black-white). Ultimately these two theories were synthesized in 1957 by Hurvich and Jameson, who showed that retinal processing corresponds to the trichromatic theory, while processing at the level of the lateral geniculate nucleus corresponds to the opponent theory.[11]
In 1931, an international group of experts known as the Commission internationale de l'éclairage (CIE) developed a mathematical color model, which mapped out the space of observable colors and assigned a set of three numbers to each.
The ability of the human eye to distinguish colors is based upon the varying sensitivity of different cells in the retina to light of different wavelengths. Humans are trichromatic—the retina contains three types of color receptor cells, or cones. One type, relatively distinct from the other two, is most responsive to light that is perceived as blue or blue-violet, with wavelengths around 450 nm; cones of this type are sometimes called short-wavelength cones or S cones (or misleadingly, blue cones). The other two types are closely related genetically and chemically: middle-wavelength cones, M cones, or green cones are most sensitive to light perceived as green, with wavelengths around 540 nm, while the long-wavelength cones, L cones, or red cones, are most sensitive to light that is perceived as greenish yellow, with wavelengths around 570 nm.
Light, no matter how complex its composition of wavelengths, is reduced to three color components by the eye. Each cone type adheres to the principle of univariance, which is that each cone's output is determined by the amount of light that falls on it over all wavelengths. For each location in the visual field, the three types of cones yield three signals based on the extent to which each is stimulated. These amounts of stimulation are sometimes called tristimulus values.
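The reduction of an arbitrary spectrum to three tristimulus values can be sketched in a few lines. The bell-shaped sensitivity curves below are illustrative stand-ins, not the measured human cone fundamentals; only the peak wavelengths (450, 540, 570 nm) come from the text.

```python
# Sketch of the principle of univariance: each cone type reduces a full
# spectrum to a single number (its tristimulus value). The Gaussian
# sensitivity curves are toy approximations, not real cone fundamentals.
import math

def cone_sensitivity(peak_nm):
    """Return a toy bell-shaped sensitivity curve centered at peak_nm."""
    def s(wavelength_nm):
        return math.exp(-((wavelength_nm - peak_nm) / 40.0) ** 2)
    return s

# Approximate peak sensitivities of S, M, and L cones (from the text).
S = cone_sensitivity(450)
M = cone_sensitivity(540)
L = cone_sensitivity(570)

def tristimulus(spectrum):
    """Reduce a spectrum {wavelength_nm: intensity} to three cone responses."""
    return tuple(
        sum(intensity * cone(wl) for wl, intensity in spectrum.items())
        for cone in (S, M, L)
    )

# Monochromatic 540 nm light stimulates the M cones most strongly ...
s_resp, m_resp, l_resp = tristimulus({540: 1.0})
assert m_resp > s_resp and m_resp > l_resp
# ... but, because the curves overlap, it also excites the L cones,
# which is why the "green" cones can never be stimulated in isolation.
assert l_resp > 0
```

Any two spectra that yield the same three numbers under this reduction are metamers in the sense described below.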
The response curve as a function of wavelength varies for each type of cone. Because the curves overlap, some tristimulus values do not occur for any incoming light combination. For example, it is not possible to stimulate only the mid-wavelength (so-called "green") cones; the other cones will inevitably be stimulated to some degree at the same time. The set of all possible tristimulus values determines the human color space. It has been estimated that humans can distinguish roughly 10 million different colors.[9]
The other type of light-sensitive cell in the eye, the rod, has a different response curve. In normal situations, when light is bright enough to strongly stimulate the cones, rods play virtually no role in vision at all.[12] In dim light, on the other hand, the cones are understimulated, leaving only the signal from the rods and resulting in a colorless response. (Furthermore, the rods are barely sensitive to light in the "red" range.) In certain conditions of intermediate illumination, the rod response and a weak cone response can together result in color discriminations not accounted for by cone responses alone. These effects are summarized in the Kruithof curve, which describes the change in color perception and pleasingness of light as a function of temperature and intensity.
While the mechanisms of color vision at the level of the retina are well-described in terms of tristimulus values, color processing after that point is organized differently. A dominant theory of color vision proposes that color information is transmitted out of the eye by three opponent processes, or opponent channels, each constructed from the raw output of the cones: a red–green channel, a blue–yellow channel, and a black–white "luminance" channel. This theory has been supported by neurobiology, and accounts for the structure of our subjective color experience. Specifically, it explains why humans cannot perceive a "reddish green" or "yellowish blue", and it predicts the color wheel: it is the collection of colors for which at least one of the two color channels measures a value at one of its extremes.
The exact nature of color perception beyond the processing already described, and indeed the status of color as a feature of the perceived world or rather as a feature of our perception of the world—a type of qualia—is a matter of complex and continuing philosophical dispute.
If one or more types of a person's color-sensing cones are missing or less responsive than normal to incoming light, that person can distinguish fewer colors and is said to be color deficient or color blind (though this latter term can be misleading; almost all color deficient individuals can distinguish at least some colors). Some kinds of color deficiency are caused by anomalies in the number or nature of cones in the retina. Others (like central or cortical achromatopsia) are caused by neural anomalies in those parts of the brain where visual processing takes place.
While most humans are trichromatic (having three types of color receptors), many animals, known as tetrachromats, have four types. These include some species of spiders, most marsupials, birds, reptiles, and many species of fish. Other species are sensitive to only two axes of color or do not perceive color at all; these are called dichromats and monochromats respectively. A distinction is made between retinal tetrachromacy (having four pigments in cone cells in the retina, compared to three in trichromats) and functional tetrachromacy (having the ability to make enhanced color discriminations based on that retinal difference). As many as half of all women are retinal tetrachromats.[13]:p.256 The phenomenon arises when an individual receives two slightly different copies of the gene for either the medium- or long-wavelength cones, which are carried on the X chromosome. To have two different genes, a person must have two X chromosomes, which is why the phenomenon only occurs in women.[13] There is one scholarly report that confirms the existence of a functional tetrachromat.[14]
In certain forms of synesthesia/ideasthesia, perceiving letters and numbers (grapheme–color synesthesia) or hearing musical sounds (music–color synesthesia) will lead to the unusual additional experiences of seeing colors. Behavioral and functional neuroimaging experiments have demonstrated that these color experiences lead to changes in behavioral tasks and lead to increased activation of brain regions involved in color perception, thus demonstrating their reality, and similarity to real color percepts, albeit evoked through a non-standard route.
After exposure to strong light in their sensitivity range, photoreceptors of a given type become desensitized. For a few seconds after the light ceases, they will continue to signal less strongly than they otherwise would. Colors observed during that period will appear to lack the color component detected by the desensitized photoreceptors. This effect is responsible for the phenomenon of afterimages, in which the eye may continue to see a bright figure after looking away from it, but in a complementary color.
Afterimage effects have also been utilized by artists, including Vincent van Gogh.
When an artist uses a limited color palette, the eye tends to compensate by seeing any gray or neutral color as the color which is missing from the color wheel. For example, in a limited palette consisting of red, yellow, black, and white, a mixture of yellow and black will appear as a variety of green, a mixture of red and black will appear as a variety of purple, and pure gray will appear bluish.[15]
The trichromatic theory is strictly true when the visual system is in a fixed state of adaptation. In reality, the visual system is constantly adapting to changes in the environment and compares the various colors in a scene to reduce the effects of the illumination. If a scene is illuminated with one light, and then with another, as long as the difference between the light sources stays within a reasonable range, the colors in the scene appear relatively constant to us. This was studied by Edwin Land in the 1970s and led to his retinex theory of color constancy.
Both phenomena are readily explained and mathematically modeled with modern theories of chromatic adaptation and color appearance (e.g. CIECAM02, iCAM).[16] There is no need to dismiss the trichromatic theory of vision, but rather it can be enhanced with an understanding of how the visual system adapts to changes in the viewing environment.
Colors vary in several different ways, including hue (shades of red, orange, yellow, green, blue, and violet), saturation, brightness, and gloss. Some color words are derived from the name of an object of that color, such as "orange" or "salmon", while others are abstract, like "red".
In the 1969 study Basic Color Terms: Their Universality and Evolution, Brent Berlin and Paul Kay describe a pattern in naming "basic" colors (like "red" but not "red-orange" or "dark red" or "blood red", which are "shades" of red). All languages that have two "basic" color names distinguish dark/cool colors from bright/warm colors. The next colors to be distinguished are usually red and then yellow or green. All languages with six "basic" colors include black, white, red, green, blue, and yellow. The pattern holds up to a set of twelve: black, gray, white, pink, red, orange, yellow, green, blue, purple, brown, and azure (distinct from blue in Russian and Italian, but not English).
Colors and their meanings and associations can play a major role in works of art, including literature.[17]
Individual colors have a variety of cultural associations such as national colors (in general described in individual color articles and color symbolism). The field of color psychology attempts to identify the effects of color on human emotion and activity. Chromotherapy is a form of alternative medicine attributed to various Eastern traditions. Colors have different associations in different countries and cultures.[18]
Different colors have been demonstrated to have effects on cognition. For example, researchers at the University of Linz in Austria demonstrated that the color red significantly decreases cognitive functioning in men.[19]
Most light sources are mixtures of various wavelengths of light. Many such sources can still effectively produce a spectral color, as the eye cannot distinguish them from single-wavelength sources. For example, most computer displays reproduce the spectral color orange as a combination of red and green light; it appears orange because the red and green are mixed in the right proportions to allow the eye's cones to respond the way they do to the spectral color orange.
A useful concept in understanding the perceived color of a non-monochromatic light source is the dominant wavelength, which identifies the single wavelength of light that produces a sensation most similar to the light source. Dominant wavelength is roughly akin to hue.
There are many color perceptions that by definition cannot be pure spectral colors due to desaturation or because they are purples (mixtures of red and violet light, from opposite ends of the spectrum). Some examples of necessarily non-spectral colors are the achromatic colors (black, gray, and white) and colors such as pink, tan, and magenta.
Two different light spectra that have the same effect on the three color receptors in the human eye will be perceived as the same color. They are metamers of that color. This is exemplified by the white light emitted by fluorescent lamps, which typically has a spectrum of a few narrow bands, while daylight has a continuous spectrum. The human eye cannot tell the difference between such light spectra just by looking into the light source, although reflected colors from objects can look different. (This is often exploited; for example, to make fruit or tomatoes look more intensely red.)
Similarly, most human color perceptions can be generated by a mixture of three colors called primaries. This is used to reproduce color scenes in photography, printing, television, and other media. There are a number of methods or color spaces for specifying a color in terms of three particular primary colors. Each method has its advantages and disadvantages depending on the particular application.
No mixture of colors, however, can produce a response truly identical to that of a spectral color, although one can get close, especially for the longer wavelengths, where the CIE 1931 color space chromaticity diagram has a nearly straight edge. For example, mixing green light (530 nm) and blue light (460 nm) produces cyan light that is slightly desaturated, because response of the red color receptor would be greater to the green and blue light in the mixture than it would be to a pure cyan light at 485 nm that has the same intensity as the mixture of blue and green.
Because of this, and because the primaries in color printing systems generally are not pure themselves, the colors reproduced are never perfectly saturated spectral colors, and so spectral colors cannot be matched exactly. However, natural scenes rarely contain fully saturated colors, thus such scenes can usually be approximated well by these systems. The range of colors that can be reproduced with a given color reproduction system is called the gamut. The CIE chromaticity diagram can be used to describe the gamut.
Another problem with color reproduction systems is connected with the acquisition devices, like cameras or scanners. The characteristics of the color sensors in the devices are often very far from the characteristics of the receptors in the human eye. In effect, acquisition of colors can be relatively poor if they have special, often very "jagged", spectra caused for example by unusual lighting of the photographed scene.
A color reproduction system "tuned" to a human with normal color vision may give very inaccurate results for other observers.
The different color response of different devices can be problematic if not properly managed. For color information stored and transferred in digital form, color management techniques, such as those based on ICC profiles, can help to avoid distortions of the reproduced colors. Color management does not circumvent the gamut limitations of particular output devices, but can assist in finding good mapping of input colors into the gamut that can be reproduced.
Additive color is light created by mixing together light of two or more different colors. Red, green, and blue are the additive primary colors normally used in additive color systems such as projectors and computer terminals.
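Additive mixing can be illustrated with per-channel arithmetic on display intensities. This is a minimal sketch assuming the usual 0–255 RGB channel convention; the function name is illustrative.

```python
# Additive color: light contributions simply add per channel,
# clamped to the display's maximum intensity (255 here).
def add_light(*colors):
    """Add RGB light sources channel by channel, clamping to 255."""
    return tuple(min(255, sum(c[i] for c in colors)) for i in range(3))

RED = (255, 0, 0)
GREEN = (0, 255, 0)
BLUE = (0, 0, 255)

assert add_light(RED, GREEN) == (255, 255, 0)          # perceived as yellow
assert add_light(RED, GREEN, BLUE) == (255, 255, 255)  # all three give white
```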
Subtractive coloring uses dyes, inks, pigments, or filters to absorb some wavelengths of light and not others. The color that a surface displays comes from the parts of the visible spectrum that are not absorbed and therefore remain visible. Without pigments or dye, fabric fibers, paint base and paper are usually made of particles that scatter white light (all colors) well in all directions. When a pigment or ink is added, wavelengths are absorbed or "subtracted" from white light, so light of another color reaches the eye.
If the light source is not pure white (as is the case with nearly all forms of artificial lighting), the reflected spectrum will appear a slightly different color. Red paint viewed under blue light may appear black: red paint is red because it scatters only the red components of the spectrum, so when it is illuminated by blue light, the blue light is absorbed by the paint, creating the appearance of a black object.
Structural colors are colors caused by interference effects rather than by pigments. Color effects are produced when a material is scored with fine parallel lines, formed of one or more parallel thin layers, or otherwise composed of microstructures on the scale of the color's wavelength. If the microstructures are spaced randomly, light of shorter wavelengths will be scattered preferentially to produce Tyndall effect colors: the blue of the sky (Rayleigh scattering, caused by structures much smaller than the wavelength of light, in this case air molecules), the luster of opals, and the blue of human irises. If the microstructures are aligned in arrays, for example the array of pits in a CD, they behave as a diffraction grating: the grating reflects different wavelengths in different directions due to interference phenomena, separating mixed "white" light into light of different wavelengths. If the structure is one or more thin layers then it will reflect some wavelengths and transmit others, depending on the layers' thickness.
Structural color is studied in the field of thin-film optics. The most ordered or the most changeable structural colors are iridescent. Structural color is responsible for the blues and greens of the feathers of many birds (the blue jay, for example), as well as certain butterfly wings and beetle shells. Variations in the pattern's spacing often give rise to an iridescent effect, as seen in peacock feathers, soap bubbles, films of oil, and mother of pearl, because the reflected color depends upon the viewing angle. Numerous scientists have carried out research in butterfly wings and beetle shells, including Isaac Newton and Robert Hooke. Since 1942, electron micrography has been used, advancing the development of products that exploit structural color, such as "photonic" cosmetics.[20]
en/4713.html.txt
In geometry, a polygon (/ˈpɒlɪɡɒn/) is a plane figure that is described by a finite number of straight line segments connected to form a closed polygonal chain or polygonal circuit. The solid plane region, the bounding circuit, or the two together, may be called a polygon.
The segments of a polygonal circuit are called its edges or sides, and the points where two edges meet are the polygon's vertices (singular: vertex) or corners. The interior of a solid polygon is sometimes called its body. An n-gon is a polygon with n sides; for example, a triangle is a 3-gon.
A simple polygon is one which does not intersect itself. Mathematicians are often concerned only with the bounding polygonal chains of simple polygons and they often define a polygon accordingly. A polygonal boundary may be allowed to cross over itself, creating star polygons and other self-intersecting polygons.
A polygon is a 2-dimensional example of the more general polytope in any number of dimensions. There are many more generalizations of polygons defined for different purposes.
The word polygon derives from the Greek adjective πολύς (polús) "much", "many" and γωνία (gōnía) "corner" or "angle". It has been suggested that γόνυ (gónu) "knee" may be the origin of gon.[1]
Polygons are primarily classified by the number of sides. See the table below.
Polygons may be characterized by their convexity or type of non-convexity:
Euclidean geometry is assumed throughout.
Any polygon has as many corners as it has sides. Each corner has several angles. The two most important ones are:
In this section, the vertices of the polygon under consideration are taken to be
(x0, y0), (x1, y1), ..., (xn−1, yn−1)
in order. For convenience in some formulas, the notation (xn, yn) = (x0, y0) will also be used.
If the polygon is non-self-intersecting (that is, simple), the signed area is

{\displaystyle A={\frac {1}{2}}\sum _{i=0}^{n-1}(x_{i}\,y_{i+1}-x_{i+1}\,y_{i})}
or, using determinants
where Qi,j is the squared distance between (xi, yi) and (xj, yj).[3][4]
The signed area depends on the ordering of the vertices and of the orientation of the plane. Commonly, the positive orientation is defined by the (counterclockwise) rotation that maps the positive x-axis to the positive y-axis. If the vertices are ordered counterclockwise (that is, according to positive orientation), the signed area is positive; otherwise, it is negative. In either case, the area formula is correct in absolute value. This is commonly called the shoelace formula or Surveyor's formula.[5]
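The shoelace formula translates directly into code. A minimal sketch (the function name is illustrative):

```python
# Shoelace (surveyor's) formula: positive for counterclockwise vertex
# order, negative for clockwise, correct in absolute value either way.
def signed_area(vertices):
    """Signed area of a simple polygon given as a list of (x, y) tuples."""
    n = len(vertices)
    total = 0.0
    for i in range(n):
        x0, y0 = vertices[i]
        x1, y1 = vertices[(i + 1) % n]  # wraps so that (xn, yn) = (x0, y0)
        total += x0 * y1 - x1 * y0
    return total / 2.0

square_ccw = [(0, 0), (2, 0), (2, 2), (0, 2)]
assert signed_area(square_ccw) == 4.0                  # counterclockwise: positive
assert signed_area(list(reversed(square_ccw))) == -4.0 # clockwise: negative
```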
The area A of a simple polygon can also be computed if the lengths of the sides, a1, a2, ..., an and the exterior angles, θ1, θ2, ..., θn are known, from:
The formula was described by Lopshits in 1963.[6]
If the polygon can be drawn on an equally spaced grid such that all its vertices are grid points, Pick's theorem gives a simple formula for the polygon's area based on the numbers of interior and boundary grid points: the former number plus one-half the latter number, minus 1.
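Pick's theorem (A = I + B/2 − 1) can be checked on a small lattice polygon. The helper below counts boundary lattice points using the standard observation that an edge between integer points contains gcd(|Δx|, |Δy|) lattice steps; the names are illustrative.

```python
# Pick's theorem on a 2x2 lattice square: A = I + B/2 - 1,
# where I = interior lattice points and B = boundary lattice points.
from math import gcd

def boundary_points(vertices):
    """Count lattice points on the boundary of an integer-vertex polygon."""
    n = len(vertices)
    return sum(
        gcd(abs(vertices[(i + 1) % n][0] - vertices[i][0]),
            abs(vertices[(i + 1) % n][1] - vertices[i][1]))
        for i in range(n)
    )

square = [(0, 0), (2, 0), (2, 2), (0, 2)]
B = boundary_points(square)    # 8 lattice points on the edges
I = 1                          # the single interior point, (1, 1)
assert I + B / 2 - 1 == 4.0    # matches the true area of the 2x2 square
```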
In every polygon with perimeter p and area A, the isoperimetric inequality p² > 4πA holds.[7]
For any two simple polygons of equal area, the Bolyai–Gerwien theorem asserts that the first can be cut into polygonal pieces which can be reassembled to form the second polygon.
The lengths of the sides of a polygon do not in general determine its area.[8] However, if the polygon is cyclic then the sides do determine the area.[9] Of all n-gons with given side lengths, the one with the largest area is cyclic. Of all n-gons with a given perimeter, the one with the largest area is regular (and therefore cyclic).[10]
Many specialized formulas apply to the areas of regular polygons.
The area of a regular polygon is given in terms of the radius r of its inscribed circle and its perimeter p by

{\displaystyle A={\tfrac {1}{2}}\,r\,p.}
This radius is also termed its apothem and is often represented as a.
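The apothem–perimeter relation A = r·p/2 can be verified numerically against a known exact area. A small sketch (the hexagon is chosen arbitrarily; variable names are illustrative):

```python
# Check A = r*p/2 for a regular hexagon with unit side length.
import math

n, s = 6, 1.0
perimeter = n * s
apothem = s / (2 * math.tan(math.pi / n))    # inradius of a regular n-gon
area_from_apothem = apothem * perimeter / 2
exact_area = 3 * math.sqrt(3) / 2 * s * s    # known area of a unit hexagon
assert math.isclose(area_from_apothem, exact_area)
```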
The area of a regular n-gon with side s inscribed in a unit circle is

{\displaystyle A={\frac {ns}{4}}{\sqrt {4-s^{2}}}.}
The area of a regular n-gon in terms of the radius R of its circumscribed circle and its perimeter p is given by
The area of a regular n-gon inscribed in a unit-radius circle, with side s and interior angle α, can also be expressed trigonometrically as
The area of a self-intersecting polygon can be defined in two different ways, giving different answers:
Using the same convention for vertex coordinates as in the previous section, the coordinates of the centroid of a solid simple polygon are

{\displaystyle C_{x}={\frac {1}{6A}}\sum _{i=0}^{n-1}(x_{i}+x_{i+1})(x_{i}y_{i+1}-x_{i+1}y_{i}),\qquad C_{y}={\frac {1}{6A}}\sum _{i=0}^{n-1}(y_{i}+y_{i+1})(x_{i}y_{i+1}-x_{i+1}y_{i}).}
In these formulas, the signed value of area A must be used.
For triangles (n = 3), the centroids of the vertices and of the solid shape are the same, but, in general, this is not true for n > 3. The centroid of the vertex set of a polygon with n vertices has the coordinates

{\displaystyle \left({\frac {1}{n}}\sum _{i=0}^{n-1}x_{i},\;{\frac {1}{n}}\sum _{i=0}^{n-1}y_{i}\right).}
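The two centroids can be compared directly in code. The sketch below implements the standard signed-area centroid formula for solid simple polygons; the quadrilateral used to show the two centroids diverging is an arbitrary example.

```python
# Centroid of the solid polygon vs. centroid of its vertex set:
# they coincide for triangles but generally differ for n > 3.
def solid_centroid(vertices):
    """Centroid of a solid simple polygon via the signed-area formula."""
    n = len(vertices)
    a = cx = cy = 0.0
    for i in range(n):
        x0, y0 = vertices[i]
        x1, y1 = vertices[(i + 1) % n]
        cross = x0 * y1 - x1 * y0
        a += cross
        cx += (x0 + x1) * cross
        cy += (y0 + y1) * cross
    a /= 2.0                         # signed area
    return (cx / (6 * a), cy / (6 * a))

def vertex_centroid(vertices):
    """Arithmetic mean of the vertex coordinates."""
    n = len(vertices)
    return (sum(x for x, _ in vertices) / n, sum(y for _, y in vertices) / n)

tri = [(0, 0), (3, 0), (0, 3)]
assert solid_centroid(tri) == vertex_centroid(tri) == (1.0, 1.0)

quad = [(0, 0), (4, 0), (4, 1), (0, 3)]          # an asymmetric quadrilateral
assert solid_centroid(quad) != vertex_centroid(quad)
```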
The idea of a polygon has been generalized in various ways. Some of the more important include:
The word polygon comes from Late Latin polygōnum (a noun), from Greek πολύγωνον (polygōnon/polugōnon), noun use of neuter of πολύγωνος (polygōnos/polugōnos, the masculine adjective), meaning "many-angled". Individual polygons are named (and sometimes classified) according to the number of sides, combining a Greek-derived numerical prefix with the suffix -gon, e.g. pentagon, dodecagon. The triangle, quadrilateral and nonagon are exceptions.
Beyond decagons (10-sided) and dodecagons (12-sided), mathematicians generally use numerical notation, for example 17-gon and 257-gon.[14]
Exceptions exist for side counts that are more easily expressed in verbal form (e.g. 20 and 30), or are used by non-mathematicians. Some special polygons also have their own names; for example the regular star pentagon is also known as the pentagram.
To construct the name of a polygon with more than 20 and less than 100 edges, combine the prefixes as follows.[18] The "kai" term applies to 13-gons and higher and was used by Kepler, and advocated by John H. Conway for clarity to concatenated prefix numbers in the naming of quasiregular polyhedra.[20]
Polygons have been known since ancient times. The regular polygons were known to the ancient Greeks, with the pentagram, a non-convex regular polygon (star polygon), appearing as early as the 7th century B.C. on a krater by Aristophanes, found at Caere and now in the Capitoline Museum.[35][36]
The first known systematic study of non-convex polygons in general was made by Thomas Bradwardine in the 14th century.[37]
In 1952, Geoffrey Colin Shephard generalized the idea of polygons to the complex plane, where each real dimension is accompanied by an imaginary one, to create complex polygons.[38]
Polygons appear in rock formations, most commonly as the flat facets of crystals, where the angles between the sides depend on the type of mineral from which the crystal is made.
Regular hexagons can occur when the cooling of lava forms areas of tightly packed columns of basalt, which may be seen at the Giant's Causeway in Northern Ireland, or at the Devil's Postpile in California.
In biology, the surface of the wax honeycomb made by bees is an array of hexagons, and the sides and base of each cell are also polygons.
In computer graphics, a polygon is a primitive used in modelling and rendering. They are defined in a database, containing arrays of vertices (the coordinates of the geometrical vertices, as well as other attributes of the polygon, such as color, shading and texture), connectivity information, and materials.[39][40]
Any surface is modelled as a tessellation called a polygon mesh. If a square mesh has n + 1 points (vertices) per side, there are n² squares in the mesh, or 2n² triangles, since there are two triangles per square. There are (n + 1)²/(2n²) vertices per triangle; as n grows large, this ratio approaches one half. Equivalently, each vertex inside the square mesh connects four edges (lines).
The imaging system calls up the structure of polygons needed for the scene to be created from the database. This is transferred to active memory and finally to the display system (screen, TV monitors etc.) so that the scene can be viewed. During this process, the imaging system renders polygons in correct perspective, ready for transmission of the processed data to the display system. Although polygons are two-dimensional, the system computer places them in a visual scene in the correct three-dimensional orientation.
|
251 |
+
|
252 |
+
In computer graphics and computational geometry, it is often necessary to determine whether a given point P = (x0,y0) lies inside a simple polygon given by a sequence of line segments. This is called the point in polygon test.[41]
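One common way to perform this test is ray casting: count how many polygon edges a horizontal ray from P crosses; an odd count means P is inside. A minimal sketch for a simple polygon given as a vertex list (points exactly on the boundary are not handled specially here):

```python
def point_in_polygon(point, polygon):
    """Ray-casting test: is `point` inside the simple polygon `polygon`?"""
    x0, y0 = point
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Does edge (x1,y1)-(x2,y2) straddle the horizontal line y = y0?
        if (y1 > y0) != (y2 > y0):
            # x-coordinate where the edge crosses that line
            x_cross = x1 + (y0 - y1) * (x2 - x1) / (y2 - y1)
            if x_cross > x0:          # crossing lies to the right of P
                inside = not inside   # each crossing flips inside/outside
    return inside

square = [(0, 0), (4, 0), (4, 4), (0, 4)]
print(point_in_polygon((2, 2), square))   # True
print(point_in_polygon((5, 2), square))   # False
```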
|
en/4714.html.txt
ADDED
@@ -0,0 +1 @@
1 |
+
Other reasons this message may be displayed:
|
en/4715.html.txt
ADDED
@@ -0,0 +1,113 @@
1 |
+
Polytheism is the worship of or belief in multiple deities, which are usually assembled into a pantheon of gods and goddesses, along with their own religions and rituals. In most religions which accept polytheism, the different gods and goddesses are representations of forces of nature or ancestral principles, and can be viewed either as autonomous or as aspects or emanations of a creator deity or transcendental absolute principle (monistic theologies), which manifests immanently in nature (panentheistic and pantheistic theologies).[1] Most of the polytheistic deities of ancient religions, with the notable exceptions of the Ancient Egyptian[2] and Hindu deities, were conceived as having physical bodies.
|
2 |
+
|
3 |
+
Polytheism is a type of theism. Within theism, it contrasts with monotheism, the belief in a singular God, in most cases transcendent. Polytheists do not always worship all the gods equally, but they can be henotheists, specializing in the worship of one particular deity. Other polytheists can be kathenotheists, worshiping different deities at different times.
|
4 |
+
|
5 |
+
Polytheism was the typical form of religion during the Bronze Age and Iron Age up to the Axial Age and the development of Abrahamic religions, the latter of which enforced strict monotheism. It is well documented in historical religions of Classical antiquity, especially ancient Greek religion and ancient Roman religion, and after the decline of Greco-Roman polytheism in tribal religions such as Germanic, Slavic and Baltic paganism.
|
6 |
+
|
7 |
+
Notable polytheistic religions practiced today include Taoism, Shenism or Chinese folk religion, Japanese Shinto, Santería, most Traditional African religions[3] and various neopagan faiths.
|
8 |
+
|
9 |
+
Hinduism denies being exclusively either monotheistic or polytheistic: it is sometimes described as monotheistic, sometimes as tending toward polytheism, with many Hindu schools regarding it as henotheistic. The Vedanta philosophy of Hinduism qualifies the idea of Hinduism being monotheistic, holding that Brahman is the cause of everything and that the universe itself is a manifestation of Brahman.
|
10 |
+
|
11 |
+
The term comes from the Greek πολύ poly ("many") and θεός theos ("god") and was coined by the Jewish writer Philo of Alexandria in his arguments with the Greeks. When Christianity spread throughout Europe and the Mediterranean, non-Christians were simply called Gentiles (a term originally used by Jews to refer to non-Jews) or pagans (locals) or by the clearly pejorative term idolaters (worshippers of "false" gods). The modern usage of the term was first revived in French by Jean Bodin in 1580, followed by Samuel Purchas's usage in English in 1614.[4]
|
12 |
+
|
13 |
+
A central, main division in modern polytheistic practices is between soft polytheism and hard polytheism.[5][6]
|
14 |
+
|
15 |
+
"Hard" polytheism is the belief that gods are distinct, separate, real divine beings, rather than psychological archetypes or personifications of natural forces. Hard polytheists reject the idea that "all gods are one god." "Hard" polytheists do not necessarily consider the gods of all cultures as being equally real, a theological position formally known as integrational polytheism or omnism. For hard polytheists, gods are individual and not only different names for the same being.[6]
|
16 |
+
|
17 |
+
This is often contrasted with "soft" polytheism, which holds that different gods may be aspects of only one god, that the pantheons of other cultures are representative of one single pantheon, psychological archetypes or personifications of natural forces.[7] In this way, gods may be interchangeable for one another across cultures.[6]
|
18 |
+
|
19 |
+
The deities of polytheism are often portrayed as complex personages of greater or lesser status, with individual skills, needs, desires and histories; in many ways similar to humans (anthropomorphic) in their personality traits, but with additional individual powers, abilities, knowledge or perceptions.
|
20 |
+
Polytheism cannot be cleanly separated from the animist beliefs prevalent in most folk religions. The gods of polytheism are in many cases the highest order of a continuum of supernatural beings or spirits, which may include ancestors, demons, wights and others. In some cases these spirits are divided into celestial or chthonic classes, and belief in the existence of all these beings does not imply that all are worshipped.
|
21 |
+
|
22 |
+
Types of deities often found in polytheism may include
|
23 |
+
|
24 |
+
In the Classical era, Sallustius (4th century AD) categorised mythology into five types:
|
25 |
+
|
26 |
+
The theological are those myths which use no bodily form but contemplate the very essence of the gods: e.g., Cronus swallowing his children. Since divinity is intellectual, and all intellect returns into itself, this myth expresses in allegory the essence of divinity.
|
27 |
+
|
28 |
+
Myths may be regarded physically when they express the activities of gods in the world.
|
29 |
+
|
30 |
+
The psychological way is to regard myths as allegories of the activities of the soul itself and/or the soul's acts of thought.
|
31 |
+
|
32 |
+
The material way is to regard material objects as actually being gods, for example: to call the earth Gaia, the ocean Okeanos, or heat Typhon.
|
33 |
+
|
34 |
+
Some well-known historical polytheistic pantheons include the Sumerian gods and the Egyptian gods, and the classically attested pantheon which includes the ancient Greek religion and Roman religion. Post-classical polytheistic religions include Norse Æsir and Vanir, the Yoruba Orisha, the Aztec gods, and many others. Today, most historical polytheistic religions are referred to as "mythology",[8] though the stories cultures tell about their gods should be distinguished from their worship or religious practice. For instance, deities portrayed in conflict in mythology would still be worshipped, sometimes in the same temple side by side, illustrating the distinction in the devotee's mind between the myth and the reality. Scholars such as Jaan Puhvel, J. P. Mallory, and Douglas Q. Adams have reconstructed aspects of the ancient Proto-Indo-European religion, from which the religions of the various Indo-European peoples derive, and argue that it was an essentially naturalist, numenistic religion. An example of a religious notion from this shared past is the concept of *dyēus, which is attested in several distinct religious systems.
|
35 |
+
|
36 |
+
In many civilizations, pantheons tended to grow over time. Deities first worshipped as the patrons of cities or places came to be collected together as empires extended over larger territories. Conquests could lead to the subordination of the elder culture's pantheon to a newer one, as in the Greek Titanomachia, and possibly also the case of the Æsir and Vanir in the Norse mythos. Cultural exchange could lead to "the same" deity being renowned in two places under different names, as seen with the Greeks, Etruscans, and Romans, and also to the cultural transmission of elements of an extraneous religion into a local cult, as with worship of the ancient Egyptian deity Osiris, which was later followed in ancient Greece.
|
37 |
+
|
38 |
+
Most ancient belief systems held that gods influenced human lives. However, the Greek philosopher Epicurus held that the gods were living, incorruptible, blissful beings who did not trouble themselves with the affairs of mortals, but who could be perceived by the mind, especially during sleep. Epicurus believed that these gods were material, human-like, and that they inhabited the empty spaces between worlds.
|
39 |
+
|
40 |
+
Hellenistic religion may still be regarded as polytheistic, but with strong monistic components, and monotheism finally emerges from Hellenistic traditions in Late Antiquity in the form of Neoplatonism and Christian theology.
|
41 |
+
|
42 |
+
The classical scheme in Ancient Greece of the Twelve Olympians (the Canonical Twelve of art and poetry) comprised:[10][11] Zeus, Hera, Poseidon, Athena, Ares, Demeter, Apollo, Artemis, Hephaestus, Aphrodite, Hermes, and Hestia. Though it is suggested that Hestia stepped down when Dionysus was invited to Mount Olympus, this is a matter of controversy. Robert Graves' The Greek Myths cites two sources[12][13] that obviously do not suggest Hestia surrendered her seat, though he suggests she did. Hades[14] was often excluded because he dwelt in the underworld. Each of the gods had a power. There was, however, a great deal of fluidity as to who was counted among their number in antiquity.[15] Different cities often worshipped the same deities, sometimes with epithets that distinguished them and specified their local nature.
|
43 |
+
|
44 |
+
The Hellenic Polytheism extended beyond mainland Greece, to the islands and coasts of Ionia in Asia Minor, to Magna Graecia (Sicily and southern Italy), and to scattered Greek colonies in the Western Mediterranean, such as Massalia (Marseille). Greek religion tempered Etruscan cult and belief to form much of the later Roman religion.
|
45 |
+
|
46 |
+
The animistic nature of folk beliefs is an anthropological cultural universal. The belief in ghosts and spirits animating the natural world and the practice of ancestor worship is universally present in the world's cultures and re-emerges in monotheistic or materialistic societies as "superstition", belief in demons, tutelary saints, fairies or extraterrestrials.
|
47 |
+
|
48 |
+
The presence of a full polytheistic religion, complete with a ritual cult conducted by a priestly caste, requires a higher level of organization and is not present in every culture. In Eurasia, the Kalash are one of very few instances of surviving polytheism. Also, a large number of polytheistic folk traditions are subsumed in contemporary Hinduism, although Hinduism is doctrinally dominated by monist or monotheist theology (Bhakti, Advaita). Historical Vedic polytheist ritualism survives as a minor current in Hinduism, known as Shrauta. More widespread is folk Hinduism, with rituals dedicated to various local or regional deities.
|
49 |
+
|
50 |
+
In Buddhism, there are higher beings commonly designated as gods, Devas; however, Buddhism, at its core (the original Pali canon), does not teach prayer or worship directed to the Devas or to any god(s).
|
51 |
+
|
52 |
+
However, in Buddhism the Buddha, who pioneered the path to enlightenment, is not worshipped in meditation but simply reflected upon. Practitioners sit before statues or images of the Buddha (Buddharupas) to reflect and contemplate on the qualities that the particular posture of that rupa represents. In Buddhism there is no creator, and the Buddha rejected the idea that a permanent, personal, fixed, omniscient deity can exist, linking into the core concept of impermanence (anicca).
|
53 |
+
|
54 |
+
Devas, in general, are beings who have had more positive karma in their past lives than humans. Their lifespan eventually ends. When their lives end, they will be reborn as devas or as other beings. When they accumulate negative karma, they are reborn as either human or any of the other lower beings. Humans and other beings could also be reborn as a deva in their next rebirth, if they accumulate enough positive karma; however, it is not recommended.
|
55 |
+
|
56 |
+
Buddhism flourished in different countries, and some of those countries have polytheistic folk religions. Buddhism syncretizes easily with other religions. Thus, Buddhism has mixed with the folk religions and emerged in polytheistic variants (such as Vajrayana) as well as non-theistic variants. For example, in Japan, Buddhism, mixed with Shinto, which worships deities called kami, created a tradition which prays to the deities of Shinto as forms of Buddhas. Thus, there may be elements of worship of gods in some forms of later Buddhism.
|
57 |
+
|
58 |
+
The concepts of Adi-Buddha and Dharmakaya are the closest to monotheism any form of Buddhism comes, all famous sages and Bodhisattvas being regarded as reflections of it.[clarification needed]
|
59 |
+
Adi-Buddha is not said to be the creator, but the originator of all things, being a deity in an Emanationist sense.
|
60 |
+
|
61 |
+
Although Christianity is officially considered a monotheistic religion,[16][17] it is sometimes claimed that Christianity is not truly monotheistic because of its teaching about the Trinity,[18] which believes in a God revealed in three different persons, namely the Father, the Son and the Holy Spirit. This is the position of some Jews and Muslims who contend that because of the adoption of a Triune conception of deity, Christianity is actually a form of Tritheism or Polytheism,[19][20] for example see Shituf or Tawhid. However, the central doctrine of Christianity is that "one God exists in Three Persons and One Substance".[21] Strictly speaking, the doctrine is a revealed mystery which while above reason is not contrary to it.[clarification needed][21] The word 'person' is an imperfect translation of the original term "hypostasis". In everyday speech "person" denotes a separate rational and moral individual, possessed of self-consciousness, and aware of individual identity despite changes. A human person is a distinct individual essence in whom human nature is individualized. But in God there are no three individuals alongside of, and separate from, one another, but only personal self distinctions[clarification needed] within the divine essence, which is not only generically[clarification needed], but also numerically, one.[22] Although the doctrine of the Trinity was not definitely formulated before the First Council of Constantinople in 381, the doctrine of one God, inherited from Judaism was always the indubitable premise of the Church's faith.[23]
|
62 |
+
|
63 |
+
Jordan Paper, a Western scholar and self-described polytheist, considers polytheism to be the normal state in human culture. He argues that "Even the Catholic Church shows polytheistic aspects with the 'worshipping' of the saints." On the other hand, he complains, monotheistic missionaries and scholars were eager to see a proto-monotheism or at least henotheism in polytheistic religions, for example, when taking from the Chinese pair of Sky and Earth only one part and calling it the King of Heaven, as Matteo Ricci did.[24]
|
64 |
+
|
65 |
+
Joseph Smith, the founder of the Latter Day Saint movement, believed in "the plurality of Gods", saying "I have always declared God to be a distinct personage, Jesus Christ a separate and distinct personage from God the Father, and that the Holy Ghost was a distinct personage and a Spirit: and these three constitute three distinct personages and three Gods".[25] Mormonism also affirms the existence of a Heavenly Mother,[26] as well as exaltation, the idea that people can become like god in the afterlife,[27] and the prevailing view among Mormons is that God the Father was once a man who lived on a planet with his own higher God, and who became perfect after following this higher God.[28][29] Some critics of Mormonism argue that statements in the Book of Mormon describe a trinitarian conception of God (e.g. 2 Nephi 31:21; Alma 11:44), but were superseded by later revelations.[30]
|
66 |
+
|
67 |
+
Mormons teach that scriptural statements on the unity of the Father, the Son, and the Holy Ghost represent a oneness of purpose, not of substance.[31] They believe that the early Christian church did not characterize divinity in terms of an immaterial, formless shared substance until post-apostolic theologians began to incorporate Greek metaphysical philosophies (such as Neoplatonism) into Christian doctrine.[32][33] Mormons believe that the truth about God's nature was restored through modern day revelation, which reinstated the original Judeo-Christian concept of a natural, corporeal, immortal God,[34] who is the literal Father of the spirits of humans.[35] It is to this personage alone that Mormons pray, as He is and always will be their Heavenly Father, the supreme "God of gods" (Deuteronomy 10:17). In the sense that Mormons worship only God the Father, they consider themselves monotheists. Nevertheless, Mormons adhere to Christ's teaching that those who receive God's word can obtain the title of "gods" (John 10:33–36), because as literal children of God they can take upon themselves His divine attributes.[36] Mormons teach that "The glory of God is intelligence" (Doctrine and Covenants 93:36), and that it is by sharing the Father's perfect comprehension of all things that both Jesus Christ and the Holy Spirit are also divine.[37]
|
68 |
+
|
69 |
+
Hinduism is not a monolithic religion: many extremely varied religious traditions and practices are grouped together under this umbrella term and some modern scholars have questioned the legitimacy of unifying them artificially and suggest that one should speak of "Hinduisms" in the plural.[38] Theistic Hinduism encompasses both monotheistic and polytheistic tendencies and variations on or mixes of both structures.
|
70 |
+
|
71 |
+
Hindus venerate deities in the form of the murti, or idol. The Puja (worship) of the murti is a way to communicate with the formless, abstract divinity (Brahman in Hinduism) which creates, sustains and dissolves creation. However, there are sects which advocate that there is no need to give a shape to God, who is omnipresent and beyond anything humans can see or feel tangibly. Especially the Arya Samaj, founded by Swami Dayananda Saraswati, and the Brahmo Samaj, founded by Ram Mohan Roy (there are others also), do not worship deities. The Arya Samaj favours Vedic chants and Havan, while the Brahmo Samaj stresses simple prayers.[citation needed]
|
72 |
+
|
73 |
+
Some Hindu philosophers and theologians argue for a transcendent metaphysical structure with a single divine essence.[citation needed] This divine essence is usually referred to as Brahman or Atman, but the understanding of the nature of this absolute divine essence is the line which defines many Hindu philosophical traditions such as Vedanta.
|
74 |
+
|
75 |
+
Among lay Hindus, some believe in different deities emanating from Brahman, while others practice more traditional polytheism and henotheism, focusing their worship on one or more personal deities, while granting the existence of others.
|
76 |
+
|
77 |
+
Academically speaking, the ancient Vedic scriptures, from which Hinduism is derived, describe four authorized disciplic lines of teaching coming down over thousands of years (Padma Purana). Four of them propound that the Absolute Truth is fully personal, as in Judeo-Christian theology: that the primal original God is personal, both transcendent and immanent throughout creation. He can be, and often is, approached through worship of murtis, called "Archa-Vigraha", which are described in the Vedas as likenesses of His various dynamic, spiritual forms. This is the Vaisnava theology.
|
78 |
+
|
79 |
+
The fifth disciplic line of Vedic spirituality, founded by Adi Shankaracharya, promotes the concept that the Absolute is Brahman, without clear differentiations, without will, without thought, without intelligence.
|
80 |
+
|
81 |
+
In the Smarta denomination of Hinduism, the philosophy of Advaita expounded by Shankara allows veneration of numerous deities[citation needed] with the understanding that all of them are but manifestations of one impersonal divine power, Brahman. Therefore, according to various schools of Vedanta including Shankara, which is the most influential and important Hindu theological tradition, there are a great number of deities in Hinduism, such as Vishnu, Shiva, Ganesha, Hanuman, Lakshmi, and Kali, but they are essentially different forms of the same "Being".[citation needed] However, many Vedantic philosophers also argue that all individuals were united by the same impersonal, divine power in the form of the Atman.
|
82 |
+
|
83 |
+
Many other Hindus, however, view polytheism as far preferable to monotheism. Ram Swarup, for example, points to the Vedas as being specifically polytheistic,[39] and states that, "only some form of polytheism alone can do justice to this variety and richness."[40] Sita Ram Goel, another 20th-century Hindu historian, wrote:
|
84 |
+
|
85 |
+
"I had an occasion to read the typescript of a book [Ram Swarup] had finished writing in 1973. It was a profound study of Monotheism, the central dogma of both Islam and Christianity, as well as a powerful presentation of what the monotheists denounce as Hindu Polytheism. I had never read anything like it. It was a revelation to me that Monotheism was not a religious concept but an imperialist idea. I must confess that I myself had been inclined towards Monotheism till this time. I had never thought that a multiplicity of Gods was the natural and spontaneous expression of an evolved consciousness."[41]
|
86 |
+
|
87 |
+
Some Hindus construe this notion of polytheism in the sense of polymorphism—one God with many forms or names. The Rig Veda, the primary Hindu scripture, elucidates this as follows:
|
88 |
+
|
89 |
+
They call him Indra, Mitra, Varuna, Agni, and he is heavenly nobly-winged Garutman. To what is One, sages give many a title they call it Agni, Yama, Matarisvan. Book I, Hymn 164, Verse 46 Rigveda[42]
|
90 |
+
|
91 |
+
Neopaganism, also known as modern paganism and contemporary paganism,[43] is a group of contemporary religious movements influenced by or claiming to be derived from the various historical pagan beliefs of pre-modern Europe.[44][45] Although they do share commonalities, contemporary Pagan religious movements are diverse and no single set of beliefs, practices, or texts are shared by them all.[46]
|
92 |
+
|
93 |
+
English occultist Dion Fortune was a major populiser of soft polytheism. In her novel, The Sea Priestess, she wrote, "All gods are one god, and all goddesses are one goddess, and there is one initiator."[47]
|
94 |
+
|
95 |
+
Reconstructionist polytheists apply scholarly disciplines such as history, archaeology and language study to revive ancient, traditional religions that have been fragmented, damaged or even destroyed, such as Norse Paganism, Greek Paganism, Celtic polytheism and others. A reconstructionist endeavours to revive and reconstruct an authentic practice, based on the ways of the ancestors but workable in contemporary life. These polytheists sharply differ from neopagans in that they consider their religion not only inspired by the religions of antiquity but often as an actual continuation or revival of those religions.[48][self-published source?]
|
96 |
+
|
97 |
+
Wicca is a duotheistic faith created by Gerald Gardner that allows for polytheism.[49][50][51] Wiccans specifically worship the Lord and Lady of the Isles (their names are oathbound).[50][51][52][53] It is an orthopraxic mystery religion that requires initiation to the priesthood in order to consider oneself Wiccan.[50][51][54] Wicca emphasizes duality and the cycle of nature.[50][51][55]
|
98 |
+
|
99 |
+
In Africa, polytheism in Serer religion dates as far back to the Neolithic Era (possibly earlier) when the ancient ancestors of the Serer people represented their Pangool on the Tassili n'Ajjer.[9] The supreme creator deity in Serer religion is Roog. However, there are many deities[56] and Pangool (singular : Fangool, the interceders with the divine) in Serer religion.[9] Each one has its own purpose and serves as Roog's agent on Earth.[56] Amongst the Cangin speakers, a sub-group of the Serers, Roog is known as Koox.[57]
|
100 |
+
|
101 |
+
The term 'polytheist' is sometimes used by Sunni Muslim extremist groups such as Islamic State of Iraq and the Levant (ISIL) as a derogatory reference to Shiite Muslims, whom they view as having "strayed from Islam’s monotheistic creed because of the reverence they show for historical figures, like Imam Ali".[58]
|
102 |
+
|
103 |
+
Polydeism (from the Greek πολύ poly ("many") and Latin deus meaning god) is a portmanteau referencing a polytheistic form of deism, encompassing the belief that the universe was the collective creation of multiple gods, each of whom created a piece of the universe or multiverse and then ceased to intervene in its evolution. This concept addresses an apparent contradiction in deism, that a monotheistic God created the universe, but now expresses no apparent interest in it, by supposing that if the universe is the construct of many gods, none of them would have an interest in the universe as a whole.
|
104 |
+
|
105 |
+
Creighton University Philosophy professor William O. Stephens,[59] who has taught this concept, suggests that C. D. Broad projected this concept[60] in Broad's 1925 article, "The Validity of Belief in a Personal God".[61] Broad noted that the arguments for the existence of God only tend to prove that "a designing mind had existed in the past, not that it does exist now. It is quite compatible with this argument that God should have died long ago, or that he should have turned his attention to other parts of the Universe", and notes in the same breath that "there is nothing in the facts to suggest that there is only one such being".[62] Stephens contends that Broad, in turn, derived the concept from David Hume. Stephens states:
|
106 |
+
|
107 |
+
David Hume's criticisms of the argument from design include the argument that, for all we know, a committee of very powerful, but not omnipotent, divine beings could have collaborated in creating the world, but then afterwards left it alone or even ceased to exist. This would be polydeism.
|
108 |
+
|
109 |
+
This use of the term appears to originate at least as early as Robert M. Bowman Jr.'s 1997 essay, Apologetics from Genesis to Revelation.[63] Bowman wrote:
|
110 |
+
|
111 |
+
Materialism (illustrated by the Epicureans), represented today by atheism, skepticism, and deism. The materialist may acknowledge superior beings, but they do not believe in a Supreme Being. Epicureanism was founded about 300 BC by Epicurus. Their world view might be called "polydeism:" there are many gods, but they are merely superhuman beings; they are remote, uninvolved in the world, posing no threat and offering no hope to human beings. Epicureans regarded traditional religion and idolatry as harmless enough as long as the gods were not feared or expected to do or say anything.
|
112 |
+
|
113 |
+
Sociologist Susan Starr Sered used the term in her 1994 book, Priestess, Mother, Sacred Sister: Religions Dominated by Women, which includes a chapter titled, "No Father in Heaven: Androgyny and Polydeism". Sered states therein that she has "chosen to gloss on 'polydeism' a range of beliefs in more than one supernatural entity."[64] Sered used this term in a way that would encompass polytheism, rather than exclude much of it, as she intended to capture both polytheistic systems and nontheistic systems that assert the influence of "spirits or ancestors".[64] This use of the term, however, does not accord with the historical misuse of deism as a concept to describe an absent creator god.
|
en/4716.html.txt
ADDED
@@ -0,0 +1,113 @@
1 |
+
Polytheism is the worship of or belief in multiple deities, which are usually assembled into a pantheon of gods and goddesses, along with their own religions and rituals. In most religions which accept polytheism, the different gods and goddesses are representations of forces of nature or ancestral principles, and can be viewed either as autonomous or as aspects or emanations of a creator deity or transcendental absolute principle (monistic theologies), which manifests immanently in nature (panentheistic and pantheistic theologies).[1] Most of the polytheistic deities of ancient religions, with the notable exceptions of the Ancient Egyptian[2] and Hindu deities, were conceived as having physical bodies.
|
2 |
+
|
3 |
+
Polytheism is a type of theism. Within theism, it contrasts with monotheism, the belief in a singular God, in most cases transcendent. Polytheists do not always worship all the gods equally, but they can be henotheists, specializing in the worship of one particular deity. Other polytheists can be kathenotheists, worshiping different deities at different times.
|
4 |
+
|
5 |
+
Polytheism was the typical form of religion during the Bronze Age and Iron Age up to the Axial Age and the development of Abrahamic religions, the latter of which enforced strict monotheism. It is well documented in historical religions of Classical antiquity, especially ancient Greek religion and ancient Roman religion, and after the decline of Greco-Roman polytheism in tribal religions such as Germanic, Slavic and Baltic paganism.
|
6 |
+
|
7 |
+
Notable polytheistic religions practiced today include Taoism, Shenism or Chinese folk religion, Japanese Shinto, Santería, most Traditional African religions[3] and various neopagan faiths.
|
8 |
+
|
9 |
+
Hinduism denies being exclusively either monotheistic or polytheistic, sometimes believing to be monotheistic but declining to polytheism with many Hindu schools regarding it as henotheistic. The Vedanta philosophy of Hinduism secludes the idea of Hinduism being monotheistic along with the belief that Brahman is the cause of everything and the universe itself being the manifestation of Brahman.
|
10 |
+
|
11 |
+
The term comes from the Greek πολύ poly ("many") and θεός theos ("god") and was first coined by the Jewish writer Philo of Alexandria in his arguments with the Greeks. When Christianity spread throughout Europe and the Mediterranean, non-Christians were simply called Gentiles (a term originally used by Jews to refer to non-Jews) or pagans (country dwellers), or by the clearly pejorative term idolaters (worshippers of "false" gods). The modern usage of the term was first revived in French by Jean Bodin in 1580, followed by Samuel Purchas's usage in English in 1614.[4]
A central, main division in modern polytheistic practices is between soft polytheism and hard polytheism.[5][6]
"Hard" polytheism is the belief that gods are distinct, separate, real divine beings, rather than psychological archetypes or personifications of natural forces. Hard polytheists reject the idea that "all gods are one god." "Hard" polytheists do not necessarily consider the gods of all cultures as being equally real, a theological position formally known as integrational polytheism or omnism. For hard polytheists, gods are individual and not only different names for the same being.[6]
This is often contrasted with "soft" polytheism, which holds that different gods may be aspects of only one god, that the pantheons of other cultures are representative of one single pantheon, psychological archetypes or personifications of natural forces.[7] In this way, gods may be interchangeable for one another across cultures.[6]
The deities of polytheism are often portrayed as complex personages of greater or lesser status, with individual skills, needs, desires and histories; in many ways similar to humans (anthropomorphic) in their personality traits, but with additional individual powers, abilities, knowledge or perceptions.
Polytheism cannot be cleanly separated from the animist beliefs prevalent in most folk religions. The gods of polytheism are in many cases the highest order of a continuum of supernatural beings or spirits, which may include ancestors, demons, wights and others. In some cases these spirits are divided into celestial or chthonic classes, and belief in the existence of all these beings does not imply that all are worshipped.
Types of deities often found in polytheism may include
In the Classical era, Sallustius (4th century AD) categorised mythology into five types:
The theological are those myths which use no bodily form but contemplate the very essence of the gods: e.g., Cronus swallowing his children. Since divinity is intellectual, and all intellect returns into itself, this myth expresses in allegory the essence of divinity.
Myths may be regarded physically when they express the activities of gods in the world.
The psychological way is to regard myths as allegories of the activities of the soul itself and/or the soul's acts of thought.
The material way is to regard material objects as actually being gods, for example calling the earth Gaia, the ocean Okeanos, or heat Typhon.
Some well-known historical polytheistic pantheons include the Sumerian gods, the Egyptian gods, and the classically attested pantheons of ancient Greek and Roman religion. Post-classical polytheistic religions include the Norse Æsir and Vanir, the Yoruba Orisha, the Aztec gods, and many others. Today, most historical polytheistic religions are referred to as "mythology",[8] though the stories cultures tell about their gods should be distinguished from their worship or religious practice. For instance, deities portrayed in conflict in mythology were sometimes still worshipped side by side in the same temple, illustrating the distinction in the devotees' minds between the myth and the reality. Scholars such as Jaan Puhvel, J. P. Mallory, and Douglas Q. Adams have reconstructed aspects of the ancient Proto-Indo-European religion from which the religions of the various Indo-European peoples derive, and they hold that it was an essentially naturalist numenistic religion. An example of a religious notion from this shared past is the concept of *dyēus, which is attested in several distinct religious systems.
In many civilizations, pantheons tended to grow over time. Deities first worshipped as the patrons of cities or places came to be collected together as empires extended over larger territories. Conquests could lead to the subordination of the elder culture's pantheon to a newer one, as in the Greek Titanomachia, and possibly also the case of the Æsir and Vanir in the Norse mythos. Cultural exchange could lead to "the same" deity being renowned in two places under different names, as seen with the Greeks, Etruscans, and Romans, and also to the cultural transmission of elements of an extraneous religion into a local cult, as with worship of the ancient Egyptian deity Osiris, which was later followed in ancient Greece.
Most ancient belief systems held that gods influenced human lives. However, the Greek philosopher Epicurus held that the gods were living, incorruptible, blissful beings who did not trouble themselves with the affairs of mortals, but who could be perceived by the mind, especially during sleep. Epicurus believed that these gods were material, human-like, and that they inhabited the empty spaces between worlds.
Hellenistic religion may still be regarded as polytheistic, but with strong monistic components, and monotheism finally emerges from Hellenistic traditions in Late Antiquity in the form of Neoplatonism and Christian theology.
The classical scheme in Ancient Greece of the Twelve Olympians (the canonical Twelve of art and poetry) comprised:[10][11] Zeus, Hera, Poseidon, Athena, Ares, Demeter, Apollo, Artemis, Hephaestus, Aphrodite, Hermes, and Hestia. Though it has been suggested that Hestia stepped down when Dionysus was invited to Mount Olympus, this is a matter of controversy. Robert Graves' The Greek Myths cites two sources[12][13] that do not suggest Hestia surrendered her seat, though he suggests she did. Hades[14] was often excluded because he dwelt in the underworld. Each of the gods held particular powers. There was, however, a great deal of fluidity as to who was counted among their number in antiquity.[15] Different cities often worshipped the same deities, sometimes with epithets that distinguished them and specified their local nature.
The Hellenic Polytheism extended beyond mainland Greece, to the islands and coasts of Ionia in Asia Minor, to Magna Graecia (Sicily and southern Italy), and to scattered Greek colonies in the Western Mediterranean, such as Massalia (Marseille). Greek religion tempered Etruscan cult and belief to form much of the later Roman religion.
The animistic nature of folk beliefs is an anthropological cultural universal. The belief in ghosts and spirits animating the natural world and the practice of ancestor worship is universally present in the world's cultures and re-emerges in monotheistic or materialistic societies as "superstition", belief in demons, tutelary saints, fairies or extraterrestrials.
The presence of a full polytheistic religion, complete with a ritual cult conducted by a priestly caste, requires a higher level of organization and is not present in every culture. In Eurasia, the Kalash are one of very few instances of surviving polytheism. Also, a large number of polytheistic folk traditions are subsumed in contemporary Hinduism, although Hinduism is doctrinally dominated by monist or monotheist theology (Bhakti, Advaita). Historical Vedic polytheist ritualism survives as a minor current in Hinduism, known as Shrauta. More widespread is folk Hinduism, with rituals dedicated to various local or regional deities.
In Buddhism, there are higher beings commonly designated as gods, the devas; however, Buddhism, at its core (the original Pali canon), does not teach the notion of praying to or worshipping the devas or any god(s).
In Buddhism, the Buddha, who pioneered the path to enlightenment, is likewise not worshipped in meditation but simply reflected upon. Practitioners venerate statues or images of the Buddha (Buddharupas) in order to reflect and contemplate on the qualities that the particular posture of that rupa represents. In Buddhism, there is no creator; the Buddha rejected the idea that a permanent, personal, fixed, omniscient deity can exist, linking into the core concept of impermanence (anicca).
Devas, in general, are beings who have accumulated more positive karma in their past lives than humans. Their lifespan eventually ends; when their lives end, they will be reborn as devas or as other beings. When they accumulate negative karma, they are reborn as humans or as any of the other lower beings. Humans and other beings can also be reborn as devas in their next rebirth if they accumulate enough positive karma, though this is not considered desirable.
Buddhism flourished in different countries, and some of those countries have polytheistic folk religions. Buddhism syncretizes easily with other religions. Thus, Buddhism has mixed with the folk religions and emerged in polytheistic variants (such as Vajrayana) as well as non-theistic variants. For example, in Japan, Buddhism, mixed with Shinto, which worships deities called kami, created a tradition which prays to the deities of Shinto as forms of Buddhas. Thus, there may be elements of worship of gods in some forms of later Buddhism.
The concepts of the Adi-Buddha and the Dharmakaya are the closest any form of Buddhism comes to monotheism, with all famous sages and bodhisattvas being regarded as reflections of it.
Adi-Buddha is not said to be the creator, but the originator of all things, being a deity in an Emanationist sense.
Although Christianity is officially considered a monotheistic religion,[16][17] it is sometimes claimed that Christianity is not truly monotheistic because of its teaching about the Trinity,[18] which holds that one God is revealed in three different persons, namely the Father, the Son and the Holy Spirit. This is the position of some Jews and Muslims, who contend that because of its adoption of a triune conception of deity, Christianity is actually a form of tritheism or polytheism;[19][20] for example, see shituf or tawhid. However, the central doctrine of Christianity is that "one God exists in Three Persons and One Substance".[21] Strictly speaking, the doctrine is a revealed mystery which, while above reason, is not contrary to it.[21] The word "person" is an imperfect translation of the original term hypostasis. In everyday speech "person" denotes a separate rational and moral individual, possessed of self-consciousness and aware of individual identity despite changes. A human person is a distinct individual essence in whom human nature is individualized. But in God there are no three individuals alongside of, and separate from, one another, but only personal self-distinctions within the divine essence, which is not only generically but also numerically one.[22] Although the doctrine of the Trinity was not definitively formulated before the First Council of Constantinople in 381, the doctrine of one God, inherited from Judaism, was always the indubitable premise of the Church's faith.[23]
Jordan Paper, a Western scholar and self-described polytheist, considers polytheism to be the normal state in human culture. He argues that "Even the Catholic Church shows polytheistic aspects with the 'worshipping' of the saints." On the other hand, he complains, monotheistic missionaries and scholars were eager to see a proto-monotheism or at least henotheism in polytheistic religions, for example, when taking from the Chinese pair of Sky and Earth only one part and calling it the King of Heaven, as Matteo Ricci did.[24]
Joseph Smith, the founder of the Latter Day Saint movement, believed in "the plurality of Gods", saying "I have always declared God to be a distinct personage, Jesus Christ a separate and distinct personage from God the Father, and that the Holy Ghost was a distinct personage and a Spirit: and these three constitute three distinct personages and three Gods".[25] Mormonism also affirms the existence of a Heavenly Mother,[26] as well as exaltation, the idea that people can become like God in the afterlife,[27] and the prevailing view among Mormons is that God the Father was once a man who lived on a planet with his own higher God, and who became perfect after following this higher God.[28][29] Some critics of Mormonism argue that statements in the Book of Mormon describe a trinitarian conception of God (e.g. 2 Nephi 31:21; Alma 11:44), but were superseded by later revelations.[30]
Mormons teach that scriptural statements on the unity of the Father, the Son, and the Holy Ghost represent a oneness of purpose, not of substance.[31] They believe that the early Christian church did not characterize divinity in terms of an immaterial, formless shared substance until post-apostolic theologians began to incorporate Greek metaphysical philosophies (such as Neoplatonism) into Christian doctrine.[32][33] Mormons believe that the truth about God's nature was restored through modern day revelation, which reinstated the original Judeo-Christian concept of a natural, corporeal, immortal God,[34] who is the literal Father of the spirits of humans.[35] It is to this personage alone that Mormons pray, as He is and always will be their Heavenly Father, the supreme "God of gods" (Deuteronomy 10:17). In the sense that Mormons worship only God the Father, they consider themselves monotheists. Nevertheless, Mormons adhere to Christ's teaching that those who receive God's word can obtain the title of "gods" (John 10:33–36), because as literal children of God they can take upon themselves His divine attributes.[36] Mormons teach that "The glory of God is intelligence" (Doctrine and Covenants 93:36), and that it is by sharing the Father's perfect comprehension of all things that both Jesus Christ and the Holy Spirit are also divine.[37]
Hinduism is not a monolithic religion: many extremely varied religious traditions and practices are grouped together under this umbrella term and some modern scholars have questioned the legitimacy of unifying them artificially and suggest that one should speak of "Hinduisms" in the plural.[38] Theistic Hinduism encompasses both monotheistic and polytheistic tendencies and variations on or mixes of both structures.
Hindus venerate deities in the form of the murti, or idol. The puja (worship) of the murti is a way to communicate with the formless, abstract divinity (Brahman in Hinduism) which creates, sustains and dissolves creation. However, some sects advocate that there is no need to give a shape to God, holding that God is omnipresent and beyond anything humans can see or feel tangibly. In particular, the Arya Samaj, founded by Swami Dayananda Saraswati, and the Brahmo Samaj, founded by Ram Mohan Roy (among others), do not worship deities. The Arya Samaj favours Vedic chants and havan, while the Brahmo Samaj stresses simple prayers.[citation needed]
Some Hindu philosophers and theologians argue for a transcendent metaphysical structure with a single divine essence.[citation needed] This divine essence is usually referred to as Brahman or Atman, but the understanding of the nature of this absolute divine essence is the line which defines many Hindu philosophical traditions such as Vedanta.
Among lay Hindus, some believe in different deities emanating from Brahman, while others practice more traditional polytheism and henotheism, focusing their worship on one or more personal deities, while granting the existence of others.
Academically speaking, the ancient Vedic scriptures, from which Hinduism is derived, describe four authorized disciplic lines of teaching coming down over thousands of years (Padma Purana). All four propound that the Absolute Truth is fully personal, as in Judeo-Christian theology: the primal original God is personal, both transcendent and immanent throughout creation. He can be, and often is, approached through the worship of murtis, called archa-vigraha, which are described in the Vedas as likenesses of His various dynamic, spiritual forms. This is the Vaishnava theology.
The fifth disciplic line of Vedic spirituality, founded by Adi Shankaracharya, promotes the concept that the Absolute is Brahman, without clear differentiations, without will, without thought, without intelligence.
In the Smarta denomination of Hinduism, the philosophy of Advaita expounded by Shankara allows veneration of numerous deities[citation needed] with the understanding that all of them are but manifestations of one impersonal divine power, Brahman. Therefore, according to various schools of Vedanta including Shankara, which is the most influential and important Hindu theological tradition, there are a great number of deities in Hinduism, such as Vishnu, Shiva, Ganesha, Hanuman, Lakshmi, and Kali, but they are essentially different forms of the same "Being".[citation needed] However, many Vedantic philosophers also argue that all individuals were united by the same impersonal, divine power in the form of the Atman.
Many other Hindus, however, view polytheism as far preferable to monotheism. Ram Swarup, for example, points to the Vedas as being specifically polytheistic,[39] and states that, "only some form of polytheism alone can do justice to this variety and richness."[40] Sita Ram Goel, another 20th-century Hindu historian, wrote:
"I had an occasion to read the typescript of a book [Ram Swarup] had finished writing in 1973. It was a profound study of Monotheism, the central dogma of both Islam and Christianity, as well as a powerful presentation of what the monotheists denounce as Hindu Polytheism. I had never read anything like it. It was a revelation to me that Monotheism was not a religious concept but an imperialist idea. I must confess that I myself had been inclined towards Monotheism till this time. I had never thought that a multiplicity of Gods was the natural and spontaneous expression of an evolved consciousness."[41]
Some Hindus construe this notion of polytheism in the sense of polymorphism—one God with many forms or names. The Rig Veda, the primary Hindu scripture, elucidates this as follows:
"They call him Indra, Mitra, Varuna, Agni, and he is heavenly nobly-winged Garutman. To what is One, sages give many a title: they call it Agni, Yama, Matarisvan." (Book I, Hymn 164, Verse 46, Rigveda)[42]
Neopaganism, also known as modern paganism and contemporary paganism,[43] is a group of contemporary religious movements influenced by or claiming to be derived from the various historical pagan beliefs of pre-modern Europe.[44][45] Although they do share commonalities, contemporary Pagan religious movements are diverse and no single set of beliefs, practices, or texts are shared by them all.[46]
English occultist Dion Fortune was a major populiser of soft polytheism. In her novel, The Sea Priestess, she wrote, "All gods are one god, and all goddesses are one goddess, and there is one initiator."[47]
Reconstructionist polytheists apply scholarly disciplines such as history, archaeology and language study to revive ancient, traditional religions that have been fragmented, damaged or even destroyed, such as Norse Paganism, Greek Paganism, Celtic polytheism and others. A reconstructionist endeavours to revive and reconstruct an authentic practice, based on the ways of the ancestors but workable in contemporary life. These polytheists sharply differ from neopagans in that they consider their religion not only inspired by the religions of antiquity but often as an actual continuation or revival of those religions.[48][self-published source?]
Wicca is a duotheistic faith created by Gerald Gardner that allows for polytheism.[49][50][51] Wiccans specifically worship the Lord and Lady of the Isles (their names are oathbound).[50][51][52][53] It is an orthopraxic mystery religion that requires initiation to the priesthood in order to consider oneself Wiccan.[50][51][54] Wicca emphasizes duality and the cycle of nature.[50][51][55]
In Africa, polytheism in Serer religion dates as far back to the Neolithic Era (possibly earlier) when the ancient ancestors of the Serer people represented their Pangool on the Tassili n'Ajjer.[9] The supreme creator deity in Serer religion is Roog. However, there are many deities[56] and Pangool (singular : Fangool, the interceders with the divine) in Serer religion.[9] Each one has its own purpose and serves as Roog's agent on Earth.[56] Amongst the Cangin speakers, a sub-group of the Serers, Roog is known as Koox.[57]
The term 'polytheist' is sometimes used by Sunni Muslim extremist groups such as Islamic State of Iraq and the Levant (ISIL) as a derogatory reference to Shiite Muslims, whom they view as having "strayed from Islam’s monotheistic creed because of the reverence they show for historical figures, like Imam Ali".[58]
Polydeism (from the Greek πολύ poly ("many") and Latin deus meaning god) is a portmanteau referencing a polytheistic form of deism, encompassing the belief that the universe was the collective creation of multiple gods, each of whom created a piece of the universe or multiverse and then ceased to intervene in its evolution. This concept addresses an apparent contradiction in deism, that a monotheistic God created the universe, but now expresses no apparent interest in it, by supposing that if the universe is the construct of many gods, none of them would have an interest in the universe as a whole.
Creighton University Philosophy professor William O. Stephens,[59] who has taught this concept, suggests that C. D. Broad projected this concept[60] in Broad's 1925 article, "The Validity of Belief in a Personal God".[61] Broad noted that the arguments for the existence of God only tend to prove that "a designing mind had existed in the past, not that it does exist now. It is quite compatible with this argument that God should have died long ago, or that he should have turned his attention to other parts of the Universe", and notes in the same breath that "there is nothing in the facts to suggest that there is only one such being".[62] Stephens contends that Broad, in turn, derived the concept from David Hume. Stephens states:
David Hume's criticisms of the argument from design include the argument that, for all we know, a committee of very powerful, but not omnipotent, divine beings could have collaborated in creating the world, but then afterwards left it alone or even ceased to exist. This would be polydeism.
This use of the term appears to originate at least as early as Robert M. Bowman Jr.'s 1997 essay, Apologetics from Genesis to Revelation.[63] Bowman wrote:
Materialism (illustrated by the Epicureans), represented today by atheism, skepticism, and deism. The materialist may acknowledge superior beings, but they do not believe in a Supreme Being. Epicureanism was founded about 300 BC by Epicurus. Their world view might be called "polydeism:" there are many gods, but they are merely superhuman beings; they are remote, uninvolved in the world, posing no threat and offering no hope to human beings. Epicureans regarded traditional religion and idolatry as harmless enough as long as the gods were not feared or expected to do or say anything.
Sociologist Susan Starr Sered used the term in her 1994 book, Priestess, Mother, Sacred Sister: Religions Dominated by Women, which includes a chapter titled, "No Father in Heaven: Androgyny and Polydeism". Sered states therein that she has "chosen to gloss on 'polydeism' a range of beliefs in more than one supernatural entity."[64] Sered used this term in a way that would encompass polytheism, rather than exclude much of it, as she intended to capture both polytheistic systems and nontheistic systems that assert the influence of "spirits or ancestors".[64] This use of the term, however, does not accord with the historical misuse of deism as a concept to describe an absent creator god.
The potato is a root vegetable native to the Americas, a starchy tuber of the plant Solanum tuberosum, and the plant itself is a perennial in the nightshade family, Solanaceae.[2]
Wild potato species, originating in modern-day Peru, can be found throughout the Americas, from the United States to southern Chile.[3] The potato was originally believed to have been domesticated by indigenous peoples of the Americas independently in multiple locations,[4] but later genetic testing of the wide variety of cultivars and wild species traced a single origin for potatoes. In the area of present-day southern Peru and extreme northwestern Bolivia, from a species in the Solanum brevicaule complex, potatoes were domesticated approximately 7,000–10,000 years ago.[5][6][7] In the Andes region of South America, where the species is indigenous, some close relatives of the potato are cultivated.
Potatoes were introduced to Europe from the Americas in the second half of the 16th century by the Spanish. Today they are a staple food in many parts of the world and an integral part of much of the world's food supply. As of 2014, potatoes were the world's fourth-largest food crop after maize (corn), wheat, and rice.[8]
Following millennia of selective breeding, there are now over 5,000 different types of potatoes.[6] Over 99% of presently cultivated potatoes worldwide descended from varieties that originated in the lowlands of south-central Chile.[9][10]
The importance of the potato as a food source and culinary ingredient varies by region and is still changing. It remains an essential crop in Europe, especially Northern and Eastern Europe, where per capita production is still the highest in the world, while the most rapid expansion in production over the past few decades has occurred in southern and eastern Asia, with China and India leading the world in overall production as of 2018.
Like the tomato, the potato is a nightshade in the genus Solanum, and the vegetative and fruiting parts of the potato contain the toxin solanine which is dangerous for human consumption. Normal potato tubers that have been grown and stored properly produce glycoalkaloids in amounts small enough to be negligible to human health, but if green sections of the plant (namely sprouts and skins) are exposed to light, the tuber can accumulate a high enough concentration of glycoalkaloids to affect human health.[11][12]
The English word potato comes from Spanish patata (the name used in Spain). The Royal Spanish Academy says the Spanish word is a hybrid of the Taíno batata ('sweet potato') and the Quechua papa ('potato').[13][14] The name originally referred to the sweet potato although the two plants are not closely related. The 16th-century English herbalist John Gerard referred to sweet potatoes as common potatoes, and used the terms bastard potatoes and Virginia potatoes for the species we now call potato.[15] In many of the chronicles detailing agriculture and plants, no distinction is made between the two.[16] Potatoes are occasionally referred to as Irish potatoes or white potatoes in the United States, to distinguish them from sweet potatoes.[15]
The name spud for a small potato comes from the digging of soil (or a hole) prior to the planting of potatoes. The word is of unknown origin and was originally (c. 1440) used as a term for a short knife or dagger, probably related to the Latin root spad-, meaning "sword"; compare Spanish espada, English "spade", and spadroon. It subsequently transferred over to a variety of digging tools. Around 1845, the name transferred to the tuber itself, the first record of this usage being in New Zealand English.[17] The origin of the word spud has erroneously been attributed to an 18th-century activist group dedicated to keeping the potato out of Britain, calling itself The Society for the Prevention of Unwholesome Diet (S.P.U.D.). It was Mario Pei's 1949 The Story of Language that popularized this false etymology. Pei writes, "the potato, for its part, was in disrepute some centuries ago. Some Englishmen who did not fancy potatoes formed a Society for the Prevention of Unwholesome Diet. The initials of the main words in this title gave rise to spud." Like most other pre-20th-century acronymic origins, this is false, and there is no evidence that a Society for the Prevention of Unwholesome Diet ever existed.[18][14]
Potato plants are herbaceous perennials that grow about 60 cm (24 in) high, depending on variety, with the leaves dying back after flowering, fruiting and tuber formation. They bear white, pink, red, blue, or purple flowers with yellow stamens. In general, the tubers of varieties with white flowers have white skins, while those of varieties with colored flowers tend to have pinkish skins.[19] Potatoes are mostly cross-pollinated by insects such as bumblebees, which carry pollen from other potato plants, though a substantial amount of self-fertilizing occurs as well. Tubers form in response to decreasing day length, although this tendency has been minimized in commercial varieties.[20]
After flowering, potato plants produce small green fruits that resemble green cherry tomatoes, each containing about 300 seeds. Like all parts of the plant except the tubers, the fruits contain the toxic alkaloid solanine and are therefore unsuitable for consumption. All new potato varieties are grown from seeds, also called "true potato seed", "TPS" or "botanical seed" to distinguish them from seed tubers. New varieties grown from seed can be propagated vegetatively by planting tubers, pieces of tubers cut to include at least one or two eyes, or cuttings, a practice used in greenhouses for the production of healthy seed tubers. Plants propagated from tubers are clones of the parent, whereas those propagated from seed produce a range of different varieties.
There are about 5,000 potato varieties worldwide. Three thousand of them are found in the Andes alone, mainly in Peru, Bolivia, Ecuador, Chile, and Colombia. They belong to eight or nine species, depending on the taxonomic school. Apart from the 5,000 cultivated varieties, there are about 200 wild species and subspecies, many of which can be cross-bred with cultivated varieties. Cross-breeding has been done repeatedly to transfer resistances to certain pests and diseases from the gene pool of wild species to the gene pool of cultivated potato species. Genetically modified varieties have met public resistance in the United States and in the European Union.[21][22]
The major species grown worldwide is Solanum tuberosum (a tetraploid with 48 chromosomes), and modern varieties of this species are the most widely cultivated. There are also four diploid species (with 24 chromosomes): S. stenotomum, S. phureja, S. goniocalyx, and S. ajanhuiri. There are two triploid species (with 36 chromosomes): S. chaucha and S. juzepczukii. There is one pentaploid cultivated species (with 60 chromosomes): S. curtilobum. There are two major subspecies of Solanum tuberosum: andigena, or Andean; and tuberosum, or Chilean.[23] The Andean potato is adapted to the short-day conditions prevalent in the mountainous equatorial and tropical regions where it originated; the Chilean potato, however, native to the Chiloé Archipelago, is adapted to the long-day conditions prevalent in the higher latitude region of southern Chile.[24]
The International Potato Center, based in Lima, Peru, holds an ISO-accredited collection of potato germplasm.[25] The international Potato Genome Sequencing Consortium announced in 2009 that they had achieved a draft sequence of the potato genome.[26] The potato genome contains 12 chromosomes and 860 million base pairs, making it a medium-sized plant genome.[27] More than 99 percent of potato varieties currently grown are direct descendants of a subspecies that once grew in the lowlands of south-central Chile.[28] Nonetheless, genetic testing of the wide variety of cultivars and wild species affirms that all potato subspecies derive from a single origin in the area of present-day southern Peru and extreme northwestern Bolivia (from a species in the Solanum brevicaule complex).[5][6][7] The Crop Wild Relatives Prebreeding project encourages the use of wild relatives in breeding programs. Enriching and preserving the gene bank collection to make potatoes adaptable to diverse environmental conditions is seen as a pressing issue due to climate change.[29]
Most modern potatoes grown in North America arrived through European settlement and not independently from the South American sources, although at least one wild potato species, Solanum fendleri, naturally ranges from Peru into Texas, where it is used in breeding for resistance to a nematode species that attacks cultivated potatoes. A secondary center of genetic variability of the potato is Mexico, where important wild species that have been used extensively in modern breeding are found, such as the hexaploid Solanum demissum, as a source of resistance to the devastating late blight disease.[30] Another relative native to this region, Solanum bulbocastanum, has been used to genetically engineer the potato to resist potato blight.[31]
Potatoes yield abundantly with little effort, and adapt readily to diverse climates as long as the climate is cool and moist enough for the plants to gather sufficient water from the soil to form the starchy tubers. Potatoes do not keep very well in storage and are vulnerable to moulds that feed on the stored tubers and quickly turn them rotten, whereas crops such as grain can be stored for several years with a low risk of rot. The food energy yield of potatoes – about 95 gigajoules per hectare (9.2 million kilocalories per acre) – is higher than that of maize (78 GJ/ha or 7.5×10^6 kcal/acre), rice (77 GJ/ha or 7.4×10^6 kcal/acre), wheat (31 GJ/ha or 3×10^6 kcal/acre), or soybeans (29 GJ/ha or 2.8×10^6 kcal/acre).[32]
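The bracketed per-acre figures follow from the per-hectare figures by straightforward unit conversion. As an illustrative sketch (not part of the original article; the constants are the standard conversion factors 1 kcal = 4184 J and 1 ha = 2.47105 acres):

```python
# Illustrative unit conversion for the energy-yield figures quoted above.
# Constants: 1 kcal = 4184 J; 1 hectare = 2.47105 acres.
KCAL_PER_GJ = 1e9 / 4184          # ~239,000 kcal per gigajoule
ACRES_PER_HECTARE = 2.47105

def gj_per_ha_to_kcal_per_acre(gj_per_ha: float) -> float:
    """Convert a food-energy yield from GJ/ha to kcal/acre."""
    return gj_per_ha * KCAL_PER_GJ / ACRES_PER_HECTARE

# Yields quoted in the text, in GJ/ha
yields = {"potato": 95, "maize": 78, "rice": 77, "wheat": 31, "soybean": 29}
for crop, gj in yields.items():
    kcal_acre = gj_per_ha_to_kcal_per_acre(gj)
    print(f"{crop}: {kcal_acre / 1e6:.1f} million kcal/acre")
```

Running this reproduces the bracketed figures, e.g. about 9.2 million kcal/acre for potato and 3.0 million kcal/acre for wheat.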
There are close to 4,000 varieties of potato including common commercial varieties, each of which has specific agricultural or culinary attributes.[33] Around 80 varieties are commercially available in the UK.[34] In general, varieties are categorized into a few main groups based on common characteristics, such as russet potatoes (rough brown skin), red potatoes, white potatoes, yellow potatoes (also called Yukon potatoes) and purple potatoes.
For culinary purposes, varieties are often differentiated by their waxiness: floury or mealy baking potatoes have more starch (20–22%) than waxy boiling potatoes (16–18%). The distinction may also arise from variation in the comparative ratio of two different potato starch compounds: amylose and amylopectin. Amylose, a long-chain molecule, diffuses from the starch granule when cooked in water, and lends itself to dishes where the potato is mashed. Varieties that contain a slightly higher amylopectin content, which is a highly branched molecule, help the potato retain its shape after being boiled in water.[35] Potatoes that are good for making potato chips or potato crisps are sometimes called "chipping potatoes", which means they meet the basic requirements of similar varietal characteristics, being firm, fairly clean, and fairly well-shaped.[36]
The European Cultivated Potato Database (ECPD) is an online collaborative database of potato variety descriptions that is updated and maintained by the Scottish Agricultural Science Agency within the framework of the European Cooperative Programme for Crop Genetic Resources Networks (ECP/GR)—which is run by the International Plant Genetic Resources Institute (IPGRI).[37]
Dozens of potato cultivars have been selectively bred specifically for their skin or, more commonly, flesh color, including gold, red, and blue varieties[38] that contain varying amounts of phytochemicals, including carotenoids for gold/yellow or polyphenols for red or blue cultivars.[39] Carotenoid compounds include provitamin A alpha-carotene and beta-carotene, which are converted to the essential nutrient, vitamin A, during digestion. The anthocyanins mainly responsible for red or blue pigmentation in potato cultivars do not have nutritional significance, but are used for visual variety and consumer appeal.[40] As of 2010, potatoes have also been bioengineered specifically for these pigmentation traits.[41]
Genetic research has produced several genetically modified varieties. 'New Leaf', owned by Monsanto Company, incorporates genes from Bacillus thuringiensis, which confers resistance to the Colorado potato beetle; 'New Leaf Plus' and 'New Leaf Y', approved by US regulatory agencies during the 1990s, also include resistance to viruses. McDonald's, Burger King, Frito-Lay, and Procter & Gamble announced they would not use genetically modified potatoes, and Monsanto published its intent to discontinue the line in March 2001.[42]
Potato starch consists of two main kinds of molecule, amylose and amylopectin; the latter is the more industrially useful, and waxy varieties contain a higher proportion of it. BASF developed the Amflora potato, which was modified to express antisense RNA to inactivate the gene for granule bound starch synthase, an enzyme which catalyzes the formation of amylose.[43] Amflora potatoes therefore produce starch consisting almost entirely of amylopectin, and are thus more useful for the starch industry. In 2010, the European Commission cleared the way for 'Amflora' to be grown in the European Union for industrial purposes only, not for food. Nevertheless, under EU rules, individual countries have the right to decide whether they will allow this potato to be grown on their territory. Commercial planting of 'Amflora' was expected in the Czech Republic and Germany in the spring of 2010, and in Sweden and the Netherlands in subsequent years.[44] Another GM potato variety developed by BASF is 'Fortuna', which was made resistant to late blight by adding two resistance genes, blb1 and blb2, which originate from the Mexican wild potato Solanum bulbocastanum.[45][46] In October 2011, BASF requested cultivation and marketing approval from the EFSA for its use as feed and food. In 2012, BASF stopped GMO development in Europe.[47][48]
In November 2014, the USDA approved a genetically modified potato developed by J.R. Simplot Company, which contains genetic modifications that prevent bruising and produce less acrylamide when fried than conventional potatoes; the modifications do not cause new proteins to be made, but rather prevent proteins from being made via RNA interference.[49][50][51]
The potato was first domesticated in the region of modern-day southern Peru and northwestern Bolivia[5] between 8000 and 5000 BC.[6] It has since spread around the world and become a staple crop in many countries.
The earliest archaeologically verified potato tuber remains have been found at the coastal site of Ancon (central Peru), dating to 2500 BC.[52][53] The most widely cultivated variety, Solanum tuberosum tuberosum, is indigenous to the Chiloé Archipelago, and has been cultivated by the local indigenous people since before the Spanish conquest.[24][54]
According to conservative estimates, the introduction of the potato was responsible for a quarter of the growth in Old World population and urbanization between 1700 and 1900.[55] In the Altiplano, potatoes provided the principal energy source for the Inca civilization, its predecessors, and its Spanish successor. Following the Spanish conquest of the Inca Empire, the Spanish introduced the potato to Europe in the second half of the 16th century, part of the Columbian exchange. The staple was subsequently conveyed by European mariners to territories and ports throughout the world. The potato was slow to be adopted by European farmers, but soon enough it became an important food staple and field crop that played a major role in the European 19th century population boom.[7] However, lack of genetic diversity, due to the very limited number of varieties initially introduced, left the crop vulnerable to disease. In 1845, a plant disease known as late blight, caused by the fungus-like oomycete Phytophthora infestans, spread rapidly through the poorer communities of western Ireland as well as parts of the Scottish Highlands, resulting in the crop failures that led to the Great Irish Famine.[30] Thousands of varieties still persist in the Andes however, where over 100 cultivars might be found in a single valley, and a dozen or more might be maintained by a single agricultural household.[56]
In 2018, world production of potatoes was 368 million tonnes, led by China with 27% of the total (table). Other major producers were India, Russia, Ukraine and the United States. It remains an essential crop in Europe (especially northern and eastern Europe), where per capita production is still the highest in the world, but the most rapid expansion over the past few decades has occurred in southern and eastern Asia.[8][57]
A raw potato is 79% water, 17% carbohydrates (88% of which is starch), 2% protein, and contains negligible fat (see table). In a 100-gram (3 1⁄2-ounce) portion, raw potato provides 322 kilojoules (77 kilocalories) of food energy and is a rich source of vitamin B6 and vitamin C (23% and 24% of the Daily Value, respectively), with no other vitamins or minerals present in significant amounts (see table). The potato is rarely eaten raw because raw potato starch is poorly digested by humans.[58] When a potato is baked, its contents of vitamin B6 and vitamin C decline notably, while there is little significant change in the amount of other nutrients.[59]
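As a rough cross-check, the stated energy value is consistent with the macronutrient figures under standard Atwater factors (about 4 kcal per gram of carbohydrate or protein, with fat negligible here) and the 4.184 kJ/kcal conversion. An illustrative sketch, using only values from the text:

```python
# Rough cross-check of the nutrition figures quoted above, using standard
# Atwater factors (~4 kcal per gram of carbohydrate or protein; fat is
# negligible here) and the 4.184 kJ/kcal conversion.
carbs_g, protein_g = 17.0, 2.0          # grams per 100 g raw potato
est_kcal = 4 * carbs_g + 4 * protein_g  # ~76 kcal, close to the stated 77
stated_kcal = 77
stated_kj = stated_kcal * 4.184         # ~322 kJ, matching the stated figure
print(f"estimated {est_kcal:.0f} kcal; stated {stated_kcal} kcal = {stated_kj:.0f} kJ")
```

This prints "estimated 76 kcal; stated 77 kcal = 322 kJ", agreeing with the figures above to within rounding.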
Potatoes are often broadly classified as having a high glycemic index (GI) and so are often excluded from the diets of individuals trying to follow a low-GI diet. The GI of potatoes can vary considerably depending on the cultivar or cultivar category (such as "red", russet, "white", or King Edward), growing conditions and storage, preparation methods (by cooking method, whether it is eaten hot or cold, whether it is mashed or cubed or consumed whole), and accompanying foods consumed (especially the addition of various high-fat or high-protein toppings).[60] In particular, consuming reheated or cooled potatoes that were previously cooked may yield a lower GI effect.[60]
In the UK, the National Health Service (NHS) does not count potatoes towards the recommended daily five portions of fruit and vegetables in its 5-A-Day programme.[61]
This table shows the nutrient content of potatoes next to other major staple foods, each one measured in its respective raw state, even though staple foods are not commonly eaten raw and are usually sprouted or cooked before eating. In sprouted and cooked form, the relative nutritional and anti-nutritional contents of each of these grains (or other foods) may be different from the values in this table. Each nutrient (every row) has the highest number highlighted to show the staple food with the greatest amount in a 100-gram raw portion.
A raw yellow dent corn
B raw unenriched long-grain white rice
C raw hard red winter wheat
D raw potato with flesh and skin
E raw cassava
F raw green soybeans
G raw sweet potato
H raw sorghum
Y raw yam
Z raw plantains
Potatoes contain toxic compounds known as glycoalkaloids, of which the most prevalent are solanine and chaconine. Solanine is found in other plants in the same family, Solanaceae, which includes such plants as deadly nightshade (Atropa belladonna), henbane (Hyoscyamus niger) and tobacco (Nicotiana spp.), as well as the food plants eggplant and tomato. These compounds, which protect the potato plant from its predators, are generally concentrated in its leaves, flowers, sprouts, and fruits (in contrast to the tubers).[63] In a summary of several studies, the glycoalkaloid content was highest in the flowers and sprouts and lowest in the tuber flesh. (The glycoalkaloid content was, in order from highest to lowest: flowers, sprouts, leaves, skin, roots, berries, peel [skin plus outer cortex of tuber flesh], stems, and tuber flesh).[11]
Exposure to light, physical damage, and age increase glycoalkaloid content within the tuber.[12] Cooking at high temperatures—over 170 °C (338 °F)—partly destroys these compounds. The concentration of glycoalkaloids in wild potatoes is sufficient to produce toxic effects in humans. Glycoalkaloid poisoning may cause headaches, diarrhea, cramps, and, in severe cases, coma and death. However, poisoning from cultivated potato varieties is very rare. Light exposure causes greening from chlorophyll synthesis, giving a visual clue as to which areas of the tuber may have become more toxic. However, this does not provide a definitive guide, as greening and glycoalkaloid accumulation can occur independently of each other.
Different potato varieties contain different levels of glycoalkaloids. The Lenape variety was released in 1967 but was withdrawn in 1970 as it contained high levels of glycoalkaloids.[64] Since then, breeders developing new varieties test for this, and sometimes have to discard an otherwise promising cultivar. Breeders try to keep glycoalkaloid levels below 200 mg/kg (200 ppmw). However, when these commercial varieties turn green, they can still approach solanine concentrations of 1000 mg/kg (1000 ppmw). In normal potatoes, analysis has shown solanine levels may be as little as 3.5% of the breeders' maximum, with 7–187 mg/kg being found.[65] While a normal potato tuber has 12–20 mg/kg of glycoalkaloid content, a green potato tuber contains 250–280 mg/kg and its skin has 1500–2200 mg/kg.[66]
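The "3.5%" quoted above follows directly from the measured range and the breeders' 200 mg/kg ceiling. An illustrative sketch, using only figures from the text:

```python
# The percentages quoted above follow directly from the measured range and
# the breeders' target ceiling. All figures are taken from the text.
BREEDER_LIMIT = 200                    # mg/kg, target maximum for new varieties
measured_low, measured_high = 7, 187   # mg/kg range found in normal potatoes

low_pct = 100 * measured_low / BREEDER_LIMIT    # 3.5% of the maximum
high_pct = 100 * measured_high / BREEDER_LIMIT  # 93.5%
print(f"{low_pct:.1f}% to {high_pct:.1f}% of the breeders' maximum")

# Greened tubers can far exceed the ceiling: skin up to 2200 mg/kg
print(f"green skin: up to {2200 / BREEDER_LIMIT:.0f}x the ceiling")
```

The lower bound, 7/200 = 3.5%, is the "as little as 3.5%" figure; a green tuber's skin at 2200 mg/kg is eleven times the breeders' target.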
Potatoes are generally grown from seed potatoes, tubers specifically grown to be free from disease and to provide consistent and healthy plants. To be disease free, the areas where seed potatoes are grown are selected with care. In the US, this restricts production of seed potatoes to 15 of the 50 states where potatoes are grown.[67] These locations are selected for their cold, hard winters that kill pests and summers with long sunshine hours for optimum growth. In the UK, most seed potatoes originate in Scotland, in areas where westerly winds reduce aphid attack and the spread of potato virus pathogens.[68]
Potato growth can be divided into five phases. During the first phase, sprouts emerge from the seed potatoes and root growth begins. During the second, photosynthesis begins as the plant develops leaves and branches above ground, and stolons develop from lower leaf axils on the below-ground stem. In the third phase, the tips of the stolons swell to form new tubers; the shoots continue to grow, and flowers typically develop soon after. Tuber bulking occurs during the fourth phase, when the plant begins investing the majority of its resources in its newly formed tubers. At this phase, several factors are critical to a good yield: optimal soil moisture and temperature, soil nutrient availability and balance, and resistance to pest attacks. The fifth phase is the maturation of the tubers: the plant canopy dies back, the tuber skins harden, and the sugars in the tubers convert to starches.[69][70]
New tubers may start growing at the surface of the soil. Since exposure to light leads to an undesirable greening of the skins and the development of solanine as a protection from the sun's rays, growers cover surface tubers. Commercial growers cover them by piling additional soil around the base of the plant as it grows (called "hilling" up, or in British English "earthing up"). An alternative method, used by home gardeners and smaller-scale growers, involves covering the growing area with organic mulches such as straw or plastic sheets.[71]
Correct potato husbandry can be an arduous task in some circumstances. Good ground preparation, harrowing, plowing, and rolling are always needed, along with a little grace from the weather and a good source of water.[72] Three successive plowings, with associated harrowing and rolling, are desirable before planting. Eliminating all root-weeds is desirable in potato cultivation. In general, the potatoes themselves are grown from the eyes of another potato and not from seed. Home gardeners often plant a piece of potato with two or three eyes in a hill of mounded soil. Commercial growers plant potatoes as a row crop using seed tubers, young plants or microtubers and may mound the entire row. Seed potato crops are rogued in some countries to eliminate diseased plants or those of a different variety from the seed crop.
Potatoes are sensitive to heavy frosts, which damage them in the ground. Even cold weather makes potatoes more susceptible to bruising and possibly later rotting, which can quickly ruin a large stored crop.
The historically significant Phytophthora infestans (late blight) remains an ongoing problem in Europe[30][73] and the United States.[74] Other potato diseases include Rhizoctonia, Sclerotinia, black leg, powdery mildew, powdery scab and leafroll virus.
Insects that commonly transmit potato diseases or damage the plants include the Colorado potato beetle, the potato tuber moth, the green peach aphid (Myzus persicae), the potato aphid, beet leafhoppers, thrips, and mites. The potato cyst nematode is a microscopic worm that thrives on the roots, thus causing the potato plants to wilt. Since its eggs can survive in the soil for several years, crop rotation is recommended.
During the crop year 2008, many of the certified organic potatoes produced in the United Kingdom and certified by the Soil Association as organic were sprayed with a copper pesticide[75] to control potato blight (Phytophthora infestans). According to the Soil Association, the total copper that can be applied to organic land is 6 kg/ha/year.[76]
According to an Environmental Working Group analysis of USDA and FDA pesticide residue tests performed from 2000 through 2008, 84% of the 2,216 tested potato samples contained detectable traces of at least one pesticide. A total of 36 unique pesticides were detected on potatoes over the 2,216 samples, though no individual sample contained more than 6 unique pesticide traces, and the average was 1.29 detectable unique pesticide traces per sample. The average quantity of all pesticide traces found in the 2,216 samples was 1.602 ppm. While this was a very low value of pesticide residue, it was the highest amongst the 50 vegetables analyzed.[77]
At harvest time, gardeners usually dig up potatoes with a long-handled, three-prong "grape" (or graip), i.e., a spading fork, or a potato hook, which is similar to the graip but with tines at a 90° angle to the handle. In larger plots, the plow is the fastest implement for unearthing potatoes. Commercial harvesting is typically done with large potato harvesters, which scoop up the plant and surrounding earth. This is transported up an apron chain consisting of steel links several feet wide, which separates some of the dirt. The chain deposits into an area where further separation occurs. Different designs use different systems at this point. The most complex designs use vine choppers and shakers, along with a blower system to separate the potatoes from the plant. The result is then usually run past workers who continue to sort out plant material, stones, and rotten potatoes before the potatoes are continuously delivered to a wagon or truck. Further inspection and separation occurs when the potatoes are unloaded from the field vehicles and put into storage.
Immature potatoes may be sold as "creamer potatoes" and are particularly valued for taste. These are often harvested by the home gardener or farmer by "grabbling", i.e. pulling out the young tubers by hand while leaving the plant in place. A creamer potato is a variety of potato harvested before it matures to keep it small and tender. It is generally either a Yukon Gold potato or a red potato, called gold creamers[78] or red creamers respectively, and measures approximately 2.5 cm (1 in) in diameter.[79] The skin of creamer potatoes is waxy and high in moisture content, and the flesh contains a lower level of starch than other potatoes. Like potatoes in general, they can be prepared by boiling, baking, frying, and roasting.[79] Slightly older than creamer potatoes are "new potatoes", which are also prized for their taste and texture and often come from the same varieties.[80]
Potatoes are usually cured after harvest to improve skin-set. Skin-set is the process by which the skin of the potato becomes resistant to skinning damage. Potato tubers may be susceptible to skinning at harvest and suffer skinning damage during harvest and handling operations. Curing allows the skin to fully set and any wounds to heal. Wound-healing prevents infection and water-loss from the tubers during storage. Curing is normally done at relatively warm temperatures (10 to 16 °C or 50 to 60 °F) with high humidity and good gas-exchange if at all possible.[81]
Storage facilities need to be carefully designed to keep the potatoes alive and slow the natural process of decomposition, which involves the breakdown of starch. It is crucial that the storage area is dark, ventilated well and, for long-term storage, maintained at temperatures near 4 °C (39 °F). For short-term storage, temperatures of about 7 to 10 °C (45 to 50 °F) are preferred.[82]
On the other hand, temperatures below 4 °C (39 °F) convert the starch in potatoes into sugar, which alters their taste and cooking qualities and leads to higher acrylamide levels in the cooked product, especially in deep-fried dishes. The discovery of acrylamides in starchy foods in 2002 has led to international health concerns. They are believed to be probable carcinogens and their occurrence in cooked foods is being studied for potentially influencing health problems.[a][83]
Under optimum conditions in commercial warehouses, potatoes can be stored for up to 10–12 months.[82] The commercial storage and retrieval of potatoes involves several phases: first drying surface moisture; wound healing at 85% to 95% relative humidity and temperatures below 25 °C (77 °F); a staged cooling phase; a holding phase; and a reconditioning phase, during which the tubers are slowly warmed. Mechanical ventilation is used at various points during the process to prevent condensation and the accumulation of carbon dioxide.[82]
When stored in homes unrefrigerated, the shelf life is usually a few weeks.
If potatoes develop green areas or start to sprout, trimming or peeling those green-colored parts is inadequate to remove copresent toxins, and such potatoes are no longer edible.[84][85]
The world dedicated 18.6 million hectares (46 million acres) to potato cultivation in 2010; the world average yield was 17.4 tonnes per hectare (7.8 short tons per acre). The United States was the most productive country, with a nationwide average yield of 44.3 tonnes per hectare (19.8 short tons per acre),[86] and the United Kingdom was a close second.
New Zealand farmers have demonstrated some of the best commercial yields in the world, ranging between 60 and 80 tonnes per hectare, with some farms reporting yields of 88 tonnes of potatoes per hectare.[87][88][89]
Yields vary widely between countries, even for the same potato variety. Average potato yields in developed economies range between 38 and 44 tonnes per hectare. China and India accounted for over a third of the world's production in 2010, with yields of 14.7 and 19.9 tonnes per hectare respectively.[86] The yield gap between farms in developing economies and developed economies represents an opportunity loss of over 400 million tonnes of potato, an amount greater than the entire 2010 world potato production. Potato crop yields are determined by factors such as the crop breed, seed age and quality, crop management practices and the plant environment. Improvements in one or more of these yield determinants, and a closure of the yield gap, can be a major boost to food supply and farmer incomes in the developing world.[90][91]
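The 400-million-tonne figure can be reproduced as a back-of-envelope calculation from the 2010 figures quoted elsewhere in the article (18.6 million hectares harvested at a 17.4 t/ha world average), if one assumes for illustration that all farms reached a mid-range developed-economy yield of 40 t/ha:

```python
# Back-of-envelope reconstruction of the yield-gap claim, using the 2010
# figures quoted elsewhere in the article (18.6 million ha harvested at a
# 17.4 t/ha world average). The 40 t/ha "developed" yield is an assumed
# midpoint of the 38-44 t/ha range, not a figure from the article.
area_mha = 18.6              # million hectares harvested, 2010
world_avg_yield = 17.4       # tonnes per hectare, world average
developed_yield = 40.0       # tonnes per hectare, assumed midpoint

actual_mt = area_mha * world_avg_yield      # ~324 million tonnes produced
potential_mt = area_mha * developed_yield   # ~744 million tonnes
gap_mt = potential_mt - actual_mt           # ~420 Mt: over 400 Mt, and more
print(f"gap ~ {gap_mt:.0f} million tonnes") # than 2010 world production
```

The resulting gap of roughly 420 million tonnes is both over 400 million tonnes and larger than the roughly 324 million tonnes actually produced in 2010, matching the claim.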
Global warming is predicted to have significant effects on global potato production.[92] Like many crops, potatoes are likely to be affected by changes in atmospheric carbon dioxide, temperature and precipitation, as well as interactions between these factors.[92] As well as affecting potatoes directly, climate change will also affect the distributions and populations of many potato diseases and pests.
Potatoes are prepared in many ways: skin-on or peeled, whole or cut up, with seasonings or without. The only universal requirement is cooking to swell the starch granules. Most potato dishes are served hot but some are first cooked, then served cold, notably potato salad and potato chips (crisps). Common dishes are: mashed potatoes, which are first boiled (usually peeled), and then mashed with milk or yogurt and butter; whole baked potatoes; boiled or steamed potatoes; French-fried potatoes or chips; cut into cubes and roasted; scalloped, diced, or sliced and fried (home fries); grated into small thin strips and fried (hash browns); grated and formed into dumplings, Rösti or potato pancakes. Unlike many foods, potatoes can also be easily cooked in a microwave oven and still retain nearly all of their nutritional value, provided they are covered in ventilated plastic wrap to prevent moisture from escaping; this method produces a meal very similar to a steamed potato, while retaining the appearance of a conventionally baked potato. Potato chunks also commonly appear as a stew ingredient. Potatoes are boiled for between 10 and 25 minutes,[94] depending on size and type, until soft.
Potatoes are also used for purposes other than eating by humans, for example:
Peruvian cuisine features the potato as a primary ingredient in many dishes, as around 3,000 varieties of this tuber are grown there.[105]
Some of the more notable dishes include boiled potato as a base for several dishes or with ají-based sauces like in Papa a la Huancaína or ocopa, diced potato for its use in soups like in cau cau, or in Carapulca with dried potato (papa seca). Smashed condimented potato is used in causa Limeña and papa rellena. French-fried potatoes are a typical ingredient in Peruvian stir-fries, including the classic dish lomo saltado.
Chuño is a freeze-dried potato product traditionally made by Quechua and Aymara communities of Peru and Bolivia,[106] and is known in various countries of South America, including Peru, Bolivia, Argentina, and Chile. In Chile's Chiloé Archipelago, potatoes are the main ingredient of many dishes, including milcaos, chapaleles, curanto and chochoca. In Ecuador, the potato, as well as being a staple with most dishes, is featured in the hearty locro de papas, a thick soup of potato, squash, and cheese.
In the UK, potatoes form part of the traditional staple, fish and chips. Roast potatoes are commonly served as part of a Sunday roast dinner and mashed potatoes form a major component of several other traditional dishes, such as shepherd's pie, bubble and squeak, and bangers and mash. New potatoes may be cooked with mint and are often served with butter.[107]
The Tattie scone is a popular Scottish dish containing potatoes. Colcannon is a traditional Irish food made with mashed potato, shredded kale or cabbage, and onion; champ is a similar dish. Boxty pancakes are eaten throughout Ireland, although associated especially with the North, and in Irish diaspora communities; they are traditionally made with grated potatoes, soaked to loosen the starch and mixed with flour, buttermilk and baking powder. A variant eaten and sold in Lancashire, especially Liverpool, is made with cooked and mashed potatoes.
Bryndzové halušky is the Slovak national dish, made of a batter of flour and finely grated potatoes that is boiled to form dumplings. These are then mixed with regionally varying ingredients.
In Germany, Northern and Eastern Europe (especially in Scandinavian countries), Finland, Poland, Russia, Belarus and Ukraine, newly harvested, early-ripening varieties are considered a special delicacy. Boiled whole and served unpeeled with dill, these "new potatoes" are traditionally consumed with Baltic herring. Puddings made from grated potatoes (kugel, kugelis, and potato babka) are popular items of Ashkenazi, Lithuanian, and Belarusian cuisine.[108] German fries and various versions of potato salad are part of German cuisine. Bauernfrühstück (literally "farmer's breakfast") is a warm German dish made from fried potatoes, eggs, ham and vegetables.
Cepelinai is the Lithuanian national dish. These are a type of dumpling made from grated raw potatoes boiled in water and usually stuffed with minced meat, although dry cottage cheese (curd) or mushrooms are sometimes used instead.[109]
In Western Europe, especially in Belgium, sliced potatoes are fried to create frieten, the original French fried potatoes. Stamppot, a traditional Dutch meal, is based on mashed potatoes mixed with vegetables.
In France, the most notable potato dish is the Hachis Parmentier, named after Antoine-Augustin Parmentier, a French pharmacist, nutritionist, and agronomist who, in the late 18th century, was instrumental in the acceptance of the potato as an edible crop in the country. Pâté aux pommes de terre is a regional potato dish from the central Allier and Limousin regions. Gratin dauphinois, consisting of baked thinly sliced potatoes with cream or milk, and tartiflette, with Reblochon cheese, are also widespread.
In the north of Italy, in particular, in the Friuli region of the northeast, potatoes serve to make a type of pasta called gnocchi.[110] Similarly, cooked and mashed potatoes or potato flour can be used in the Knödel or dumpling eaten with or added to meat dishes all over central and Eastern Europe, but especially in Bavaria and Luxembourg. Potatoes form one of the main ingredients in many soups such as the vichyssoise and Albanian potato and cabbage soup. In western Norway, komle is popular.
A traditional Canary Islands dish is Canarian wrinkly potatoes or papas arrugadas. Tortilla de patatas (potato omelette) and patatas bravas (a dish of fried potatoes in a spicy tomato sauce) are near-universal constituents of Spanish tapas.
In the US, potatoes have become one of the most widely consumed crops and thus have a variety of preparation methods and condiments. French fries and often hash browns are commonly found in typical American fast-food burger "joints" and cafeterias. One popular preparation involves a baked potato with cheddar cheese (or sour cream and chives) on top, and in New England "smashed potatoes" (a chunkier variation on mashed potatoes, retaining the peel) have great popularity. Potato flakes are popular as an instant variety of mashed potatoes, which reconstitute into mashed potatoes by adding water, with butter or oil and salt to taste. A regional dish of Central New York, salt potatoes are bite-size new potatoes boiled in water saturated with salt, then served with melted butter. At more formal dinners, a common practice includes taking small red potatoes, slicing them, and roasting them in an iron skillet. Among American Jews, the practice of eating latkes (fried potato pancakes) is common during the festival of Hanukkah.
A traditional Acadian dish from New Brunswick is known as poutine râpée. The Acadian poutine is a ball of grated and mashed potato, salted, sometimes filled with pork in the centre, and boiled. The result is a moist ball about the size of a baseball. It is commonly eaten with salt and pepper or brown sugar. It is believed to have originated from the German Klöße, prepared by early German settlers who lived among the Acadians. Poutine, by contrast, is a hearty serving of French fries, fresh cheese curds and hot gravy. Tracing its origins to Quebec in the 1950s, it has become a widespread and popular dish throughout Canada.
Idaho potatoes are graded by quality: No. 1 potatoes are the highest grade, while No. 2 potatoes are rated lower due to their appearance (e.g. blemishes, bruises, or pointy ends).[111] Potato density can be assessed by floating the tubers in brines.[112] High-density potatoes are desirable in the production of dehydrated mashed potatoes, potato crisps and french fries.[112]
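The brine flotation test can be sketched as a simple density comparison: a tuber floats when its specific gravity is below the brine's. This is an illustrative sketch only; the brine threshold and the sample readings are assumed values, not industry standards.

```python
def floats_in_brine(potato_sg: float, brine_sg: float) -> bool:
    """A potato floats when its specific gravity is below the brine's."""
    return potato_sg < brine_sg

def select_high_density(potatoes: dict, brine_sg: float = 1.08) -> list:
    """Keep potatoes that sink in the brine, i.e. are denser than it.

    High-density tubers are preferred for dehydrated mash, crisps,
    and french fries. The 1.08 brine value is an assumed threshold
    for illustration, not an industry standard.
    """
    return [name for name, sg in potatoes.items()
            if not floats_in_brine(sg, brine_sg)]

# Hypothetical specific-gravity readings for three lots
samples = {"lot_a": 1.095, "lot_b": 1.070, "lot_c": 1.085}
print(select_high_density(samples))  # ['lot_a', 'lot_c'] sink; 'lot_b' floats
```

In practice a grader would mix brines of several known specific gravities and sort tubers by the strongest brine in which they still sink.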
In South Asia, the potato is a very popular traditional staple. In India, the most popular potato dishes are aloo ki sabzi, batata vada, and samosa, which is spicy mashed potato mixed with a small amount of vegetables, stuffed in conical dough, and deep-fried. Potatoes are also a major ingredient in fast food items, such as aloo chaat, where they are deep-fried and served with chutney. In Northern India, alu dum and alu paratha are a favourite part of the diet; the first is a spicy curry of boiled potato, the second is a type of stuffed chapati.
A dish called masala dosa from South India is notable all over India. It is a thin pancake of rice and pulse batter rolled over spicy mashed potato and eaten with sambhar and chutney. Poori, in South India and particularly in Tamil Nadu, is almost always served with mashed potato masala. Other favourite dishes are alu tikki and pakoda items.
Vada pav is a popular vegetarian fast food dish in Mumbai and other regions of Maharashtra, India.
Aloo posto (a curry with potatoes and poppy seeds) is immensely popular in East India, especially Bengal. Although potatoes are not native to India, they have become a vital part of food all over the country, especially in North Indian preparations. In Tamil Nadu, the tuber acquired a name based on its appearance, 'urulai-k-kizhangu' (உருளைக் கிழங்கு), meaning cylindrical tuber.
Aloo gosht, a potato and meat curry, is a popular dish in South Asia, especially in Pakistan.
In East Asia, particularly Southeast Asia, rice is by far the predominant starch crop, with potatoes a secondary crop, especially in China and Japan. However, potatoes are used in northern China, where rice is not easily grown; a popular dish there is 青椒土豆丝 (qīng jiāo tǔ dòu sī, shredded potato with green pepper), made with green pepper, vinegar and thin slices of potato. In the winter, roadside sellers in northern China will also sell roasted potatoes. Potatoes are also occasionally seen in Korean and Thai cuisines.[113]
The potato has been an essential crop in the Andes since the pre-Columbian Era. The Moche culture from Northern Peru made ceramics from earth, water, and fire. This pottery was a sacred substance, formed in significant shapes and used to represent important themes. Potatoes are represented anthropomorphically as well as naturally.[114]
During the late 19th century, numerous images of potato harvesting appeared in European art, including the works of Willem Witsen and Anton Mauve.[115]
Van Gogh's 1885 painting The Potato Eaters portrays a family eating potatoes. Van Gogh said he wanted to depict peasants as they really were. He deliberately chose coarse and ugly models, thinking that they would be natural and unspoiled in his finished work.[116]
Jean-François Millet's The Potato Harvest depicts peasants working in the plains between Barbizon and Chailly. It presents a theme representative of the peasants' struggle for survival. Millet's technique for this work incorporated paste-like pigments thickly applied over a coarsely textured canvas.
Invented in 1949, and marketed and sold commercially by Hasbro in 1952, Mr. Potato Head is an American toy that consists of a plastic potato and attachable plastic parts, such as ears and eyes, to make a face. It was the first toy ever advertised on television.[117]
en/4718.html.txt
An apple is an edible fruit produced by an apple tree (Malus domestica). Apple trees are cultivated worldwide and are the most widely grown species in the genus Malus. The tree originated in Central Asia, where its wild ancestor, Malus sieversii, is still found today. Apples have been grown for thousands of years in Asia and Europe and were brought to North America by European colonists. Apples have religious and mythological significance in many cultures, including Norse, Greek and European Christian tradition.
Apple trees are large if grown from seed. Generally, apple cultivars are propagated by grafting onto rootstocks, which control the size of the resulting tree. There are more than 7,500 known cultivars of apples, resulting in a range of desired characteristics. Different cultivars are bred for various tastes and use, including cooking, eating raw and cider production. Trees and fruit are prone to a number of fungal, bacterial and pest problems, which can be controlled by a number of organic and non-organic means. In 2010, the fruit's genome was sequenced as part of research on disease control and selective breeding in apple production.
Worldwide production of apples in 2018 was 86 million tonnes, with China accounting for nearly half of the total.[3]
The word "apple", formerly spelled æppel in Old English, is derived from the Proto-Germanic root *ap(a)laz, which could also mean fruit in general. This is ultimately derived from Proto-Indo-European *ab(e)l-, but the precise original meaning and the relationship between both words is uncertain.
As late as the 17th century, the word also functioned as a generic term for all fruit other than berries but including nuts—such as the 14th century Middle English word appel of paradis, meaning a banana.[4] This use is analogous to the French language use of pomme.
The apple is a deciduous tree, generally standing 2 to 4.5 m (6 to 15 ft) tall in cultivation and up to 9 m (30 ft) in the wild. When cultivated, the size, shape and branch density are determined by rootstock selection and trimming method. The leaves are alternately arranged dark green-colored simple ovals with serrated margins and slightly downy undersides.[5]
Blossoms are produced in spring simultaneously with the budding of the leaves and are produced on spurs and some long shoots. The 3 to 4 cm (1 to 1 1⁄2 in) flowers are white with a pink tinge that gradually fades, five petaled, with an inflorescence consisting of a cyme with 4–6 flowers. The central flower of the inflorescence is called the "king bloom"; it opens first and can develop a larger fruit.[5][6]
The fruit matures in late summer or autumn, and cultivars exist in a wide range of sizes. Commercial growers aim to produce an apple that is 7 to 8.5 cm (2 3⁄4 to 3 1⁄4 in) in diameter, due to market preference. Some consumers, especially those in Japan, prefer a larger apple, while apples below 5.5 cm (2 1⁄4 in) are generally used for making juice and have little fresh market value. The skin of ripe apples is generally red, yellow, green, or pink, though many bi- or tri-colored cultivars may be found.[7] The skin may also be wholly or partly russeted, i.e. rough and brown. The skin is covered in a protective layer of epicuticular wax.[8] The flesh is generally pale yellowish-white,[7] though pink or yellow flesh also occurs.
The original wild ancestor of Malus domestica was Malus sieversii, found growing wild in the mountains of Central Asia in southern Kazakhstan, Kyrgyzstan, Tajikistan, and Xinjiang, China.[5][9] Cultivation of the species, most likely beginning on the forested flanks of the Tian Shan mountains, progressed over a long period of time and permitted secondary introgression of genes from other species into the open-pollinated seeds. Significant exchange with Malus sylvestris, the crabapple, resulted in current populations of apples being more closely related to crabapples than to the more morphologically similar progenitor Malus sieversii. In strains without recent admixture, the contribution of the latter predominates.[10][11][12]
In 2010, an Italian-led consortium announced they had sequenced the complete genome of the apple in collaboration with horticultural genomicists at Washington State University,[13] using 'Golden Delicious'.[14] It had about 57,000 genes, the highest number of any plant genome studied to date[15] and more genes than the human genome (about 30,000).[16] This new understanding of the apple genome will help scientists identify genes and gene variants that contribute to resistance to disease and drought, and other desirable characteristics. Understanding the genes behind these characteristics will help scientists perform more knowledgeable selective breeding. The genome sequence also provided proof that Malus sieversii was the wild ancestor of the domestic apple—an issue that had been long-debated in the scientific community.[13]
Malus sieversii is recognized as a major progenitor species to the cultivated apple, and is morphologically similar. Due to the genetic variability in Central Asia, this region is generally considered the center of origin for apples.[17] The apple is thought to have been domesticated 4000-10000 years ago in the Tian Shan Mountains, and then to have travelled along the Silk Road to Europe, with hybridization and introgression of wild crabapples from Siberia (M. baccata (L.) Borkh.), Caucasus (M. orientalis Uglitz.), and Europe (M. sylvestris Mill.). Only the M. sieversii trees growing on the western side of Tian Shan Mountains contributed genetically to the domesticated apple, not the isolated population on the eastern side.[18]
Chinese soft apples, such as M. asiatica and M. prunifolia, have been cultivated as dessert apples for more than 2000 years in China. These are thought to be hybrids between M. baccata and M. sieversii in Kazakhstan.[18]
Among the traits selected for by human growers are size, fruit acidity, color, firmness, and soluble sugar. Unusually for domesticated fruits, the wild M. sieversii fruit is only slightly smaller than the modern domesticated apple.[18]
At the Sammardenchia-Cueis site near Udine in Northeastern Italy, seeds from some form of apples have been found in material carbon dated to around 4000 BCE.[19] Genetic analysis has not yet been successfully used to determine whether such ancient apples were wild Malus sylvestris or Malus domestica containing Malus sieversii ancestry.[20] It is generally also hard to distinguish in the archeological record between foraged wild apples and apple plantations.
There is indirect evidence of apple cultivation in the third millennium BCE in the Middle East. There was substantial apple production in European classical antiquity, and grafting was certainly known then.[20] Grafting is an essential part of modern domesticated apple production, as a way to propagate the best cultivars; it is unclear when apple tree grafting was invented.[20]
Winter apples, picked in late autumn and stored just above freezing, have been an important food in Asia and Europe for millennia.[21]
Of the many Old World plants that the Spanish introduced to Chiloé Archipelago in the 16th century, apple trees became particularly well adapted.[22] Apples were introduced to North America by colonists in the 17th century,[5] and the first apple orchard on the North American continent was planted in Boston by Reverend William Blaxton in 1625.[23] The only apples native to North America are crab apples, which were once called "common apples".[24] Apple cultivars brought as seed from Europe were spread along Native American trade routes, as well as being cultivated on colonial farms. An 1845 United States apple nursery catalogue sold 350 of the "best" cultivars, showing the proliferation of new North American cultivars by the early 19th century.[24] In the 20th century, irrigation projects in Eastern Washington began and allowed the development of the multibillion-dollar fruit industry, of which the apple is the leading product.[5]
Until the 20th century, farmers stored apples in frostproof cellars during the winter for their own use or for sale. Improved transportation of fresh apples by train and road replaced the necessity for storage.[25][26] Controlled atmosphere facilities are used to keep apples fresh year-round. Controlled atmosphere facilities use high humidity, low oxygen, and controlled carbon dioxide levels to maintain fruit freshness. They were first used in the United States in the 1960s.[27]
In Norse mythology, the goddess Iðunn is portrayed in the Prose Edda (written in the 13th century by Snorri Sturluson) as providing apples to the gods that give them eternal youthfulness. English scholar H. R. Ellis Davidson links apples to religious practices in Germanic paganism, from which Norse paganism developed. She points out that buckets of apples were found in the Oseberg ship burial site in Norway, that fruit and nuts (Iðunn having been described as being transformed into a nut in Skáldskaparmál) have been found in the early graves of the Germanic peoples in England and elsewhere on the continent of Europe, which may have had a symbolic meaning, and that nuts are still a recognized symbol of fertility in southwest England.[28]
Davidson notes a connection between apples and the Vanir, a tribe of gods associated with fertility in Norse mythology, citing an instance of eleven "golden apples" being given to woo the beautiful Gerðr by Skírnir, who was acting as messenger for the major Vanir god Freyr in stanzas 19 and 20 of Skírnismál. Davidson also notes a further connection between fertility and apples in Norse mythology in chapter 2 of the Völsunga saga: when the major goddess Frigg sends King Rerir an apple after he prays to Odin for a child, Frigg's messenger (in the guise of a crow) drops the apple in his lap as he sits atop a mound.[29] Rerir's wife's consumption of the apple results in a six-year pregnancy and the Caesarean section birth of their son, the hero Völsung.[30]
Further, Davidson points out the "strange" phrase "Apples of Hel" used in an 11th-century poem by the skald Thorbiorn Brúnarson. She states this may imply that the apple was thought of by Brúnarson as the food of the dead. Further, Davidson notes that the potentially Germanic goddess Nehalennia is sometimes depicted with apples and that parallels exist in early Irish stories. Davidson asserts that while cultivation of the apple in Northern Europe extends back to at least the time of the Roman Empire and came to Europe from the Near East, the native varieties of apple trees growing in Northern Europe are small and bitter. Davidson concludes that in the figure of Iðunn "we must have a dim reflection of an old symbol: that of the guardian goddess of the life-giving fruit of the other world."[28]
Apples appear in many religious traditions, often as a mystical or forbidden fruit. One of the problems identifying apples in religion, mythology and folktales is that the word "apple" was used as a generic term for all (foreign) fruit, other than berries, including nuts, as late as the 17th century.[31] For instance, in Greek mythology, the Greek hero Heracles, as a part of his Twelve Labours, was required to travel to the Garden of the Hesperides and pick the golden apples off the Tree of Life growing at its center.[32][33][34]
The Greek goddess of discord, Eris, became disgruntled after she was excluded from the wedding of Peleus and Thetis.[35] In retaliation, she tossed a golden apple inscribed Καλλίστη (Kalliste, sometimes transliterated Kallisti, "For the most beautiful one"), into the wedding party. Three goddesses claimed the apple: Hera, Athena, and Aphrodite. Paris of Troy was appointed to select the recipient. After being bribed by both Hera and Athena, Aphrodite tempted him with the most beautiful woman in the world, Helen of Sparta. He awarded the apple to Aphrodite, thus indirectly causing the Trojan War.[36]
The apple was thus considered, in ancient Greece, sacred to Aphrodite. To throw an apple at someone was to symbolically declare one's love; and similarly, to catch it was to symbolically show one's acceptance of that love. An epigram claiming authorship by Plato states:[37]
I throw the apple at you, and if you are willing to love me, take it and share your girlhood with me; but if your thoughts are what I pray they are not, even then take it, and consider how short-lived is beauty.
Atalanta, also of Greek mythology, raced all her suitors in an attempt to avoid marriage. She outran all but Hippomenes (also known as Melanion, a name possibly derived from melon the Greek word for both "apple" and fruit in general),[33] who defeated her by cunning, not speed. Hippomenes knew that he could not win in a fair race, so he used three golden apples (gifts of Aphrodite, the goddess of love) to distract Atalanta. It took all three apples and all of his speed, but Hippomenes was finally successful, winning the race and Atalanta's hand.[32]
Though the forbidden fruit of Eden in the Book of Genesis is not identified, popular Christian tradition has held that it was an apple that Eve coaxed Adam to share with her.[38] The origin of the popular identification with a fruit unknown in the Middle East in biblical times is found in confusion between the Latin words mālum (an apple) and mălum (an evil), each of which is normally written malum.[39] The tree of the forbidden fruit is called "the tree of the knowledge of good and evil" in Genesis 2:17, and the Latin for "good and evil" is bonum et malum.[40]
Renaissance painters may also have been influenced by the story of the golden apples in the Garden of Hesperides. As a result, in the story of Adam and Eve, the apple became a symbol for knowledge, immortality, temptation, the fall of man into sin, and sin itself. The larynx in the human throat has been called the "Adam's apple" because of a notion that it was caused by the forbidden fruit remaining in the throat of Adam.[38] The apple as symbol of sexual seduction has been used to imply human sexuality, possibly in an ironic vein.[38]
The proverb, "An apple a day keeps the doctor away", addressing the supposed health benefits of the fruit, has been traced to 19th-century Wales, where the original phrase was "Eat an apple on going to bed, and you'll keep the doctor from earning his bread".[41] In the 19th century and early 20th, the phrase evolved to "an apple a day, no doctor to pay" and "an apple a day sends the doctor away"; the phrasing now commonly used was first recorded in 1922.[42] Despite the proverb, there is no evidence that eating an apple daily has any significant health effects.[43]
There are more than 7,500 known cultivars of apples.[44] Cultivars vary in their yield and the ultimate size of the tree, even when grown on the same rootstock.[45] Different cultivars are available for temperate and subtropical climates. The UK's National Fruit Collection, which is the responsibility of the Department of Environment, Food, and Rural Affairs, includes a collection of over 2,000 cultivars of apple tree in Kent.[46] The University of Reading, which is responsible for developing the UK national collection database, provides access to search the national collection. The University of Reading's work is part of the European Cooperative Programme for Plant Genetic Resources of which there are 38 countries participating in the Malus/Pyrus work group.[47]
The UK's national fruit collection database contains much information on the characteristics and origin of many apples, including alternative names for what is essentially the same "genetic" apple cultivar. Most of these cultivars are bred for eating fresh (dessert apples), though some are cultivated specifically for cooking (cooking apples) or producing cider. Cider apples are typically too tart and astringent to eat fresh, but they give the beverage a rich flavor that dessert apples cannot.[48]
Commercially popular apple cultivars are soft but crisp. Other desirable qualities in modern commercial apple breeding are a colorful skin, absence of russeting, ease of shipping, lengthy storage ability, high yields, disease resistance, common apple shape, and developed flavor.[45] Modern apples are generally sweeter than older cultivars, as popular tastes in apples have varied over time. Most North Americans and Europeans favor sweet, subacid apples, but tart apples have a strong minority following.[49] Extremely sweet apples with barely any acid flavor are popular in Asia,[49] especially the Indian Subcontinent.[48]
Old cultivars are often oddly shaped, russeted, and grow in a variety of textures and colors. Some find them to have better flavor than modern cultivars,[50] but they may have other problems that make them commercially unviable—low yield, disease susceptibility, poor tolerance for storage or transport, or just being the "wrong" size. A few old cultivars are still produced on a large scale, but many have been preserved by home gardeners and farmers that sell directly to local markets. Many unusual and locally important cultivars with their own unique taste and appearance exist; apple conservation campaigns have sprung up around the world to preserve such local cultivars from extinction. In the United Kingdom, old cultivars such as 'Cox's Orange Pippin' and 'Egremont Russet' are still commercially important even though by modern standards they are low yielding and susceptible to disease.[5]
'Alice'
'Ambrosia'
'Ananasrenette'
'Arkansas Black'
'Aroma'
'Belle de Boskoop'
'Bramley'
'Cox's Orange Pippin'
'Cox Pomona'
'Discovery'
'Egremont Russet'
'Fuji'
'Gala'
'Golden Delicious'
'Goldrenette' ('Reinette')
'Granny Smith'
'Honeycrisp'
'James Grieve'
'Jonagold'
'Lobo'
'McIntosh'
'Pacific Rose'
'Pink Lady'
'Red Delicious'
'Sampion' ('Shampion')
'Stark Delicious'
'SugarBee'
'Summerred'
'Yellow Transparent'
Many apples grow readily from seeds. However, more than with most perennial fruits, apples must be propagated asexually to obtain the sweetness and other desirable characteristics of the parent. This is because seedling apples are an example of "extreme heterozygotes", in that rather than inheriting genes from their parents to create a new apple with parental characteristics, they are instead significantly different from their parents, perhaps to compete with the many pests.[51] Triploid cultivars have an additional reproductive barrier in that 3 sets of chromosomes cannot be divided evenly during meiosis, yielding unequal segregation of the chromosomes (aneuploids). Even in the case when a triploid plant can produce a seed (apples are an example), it occurs infrequently, and seedlings rarely survive.[52]
Because apples do not breed true when planted as seeds, grafting is usually used, although cuttings can take root, breed true, and may live for a century. The rootstock used for the bottom of the graft can be selected to produce trees of a large variety of sizes, as well as changing the winter hardiness, insect and disease resistance, and soil preference of the resulting tree. Dwarf rootstocks can be used to produce very small trees (less than 3.0 m or 10 ft high at maturity), which bear fruit many years earlier in their life cycle than full size trees, and are easier to harvest.[53] Dwarf rootstocks for apple trees can be traced as far back as 300 BCE, to the area of Persia and Asia Minor. Alexander the Great sent samples of dwarf apple trees to Aristotle's Lyceum. Dwarf rootstocks became common by the 15th century and later went through several cycles of popularity and decline throughout the world.[54] The majority of the rootstocks used today to control size in apples were developed in England in the early 1900s. The East Malling Research Station conducted extensive research into rootstocks, and today their rootstocks are given an "M" prefix to designate their origin. Rootstocks marked with an "MM" prefix are Malling-series cultivars later crossed with trees of 'Northern Spy' in Merton, England.[55]
Most new apple cultivars originate as seedlings, which either arise by chance or are bred by deliberately crossing cultivars with promising characteristics.[56] The words "seedling", "pippin", and "kernel" in the name of an apple cultivar suggest that it originated as a seedling. Apples can also form bud sports (mutations on a single branch). Some bud sports turn out to be improved strains of the parent cultivar. Some differ sufficiently from the parent tree to be considered new cultivars.[57]
Since the 1930s, the Excelsior Experiment Station at the University of Minnesota has introduced a steady progression of important apples that are widely grown, both commercially and by local orchardists, throughout Minnesota and Wisconsin. Its most important contributions have included 'Haralson' (which is the most widely cultivated apple in Minnesota), 'Wealthy', 'Honeygold', and 'Honeycrisp'.
Apples have been acclimatized in Ecuador at very high altitudes, where the constant temperate conditions year-round can often allow two crops per year.[58]
Apples are self-incompatible; they must cross-pollinate to develop fruit. During each season's flowering, apple growers often utilize pollinators to carry pollen. Honey bees are most commonly used. Orchard mason bees are also used as supplemental pollinators in commercial orchards. Bumblebee queens are sometimes present in orchards, but not usually in sufficient number to be significant pollinators.[57][59]
There are four to seven pollination groups in apples, depending on climate:
One cultivar can be pollinated by a compatible cultivar from the same group or close (A with A, or A with B, but not A with C or D).[60]
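A minimal sketch of this group rule, assuming pollination groups are single letters ordered by bloom time (the letter encoding is illustrative, not a standard notation):

```python
def can_pollinate(group_a: str, group_b: str) -> bool:
    """True when two pollination groups are the same or immediately adjacent.

    Groups are single letters ordered by bloom time, so A with A or
    A with B is compatible, but A with C or A with D is not.
    """
    return abs(ord(group_a.upper()) - ord(group_b.upper())) <= 1

print(can_pollinate("A", "B"))  # True
print(can_pollinate("A", "C"))  # False
```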
Cultivars are sometimes classified by the day of peak bloom in the average 30-day blossom period, with pollenizers selected from cultivars within a 6-day overlap period.
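The peak-bloom rule can likewise be sketched as a window check. The 6-day figure comes from the text; interpreting "overlap period" as an absolute difference of peak days, and the day numbers used below, are assumptions for illustration:

```python
OVERLAP_DAYS = 6  # pollenizer must peak within 6 days of the target cultivar

def is_suitable_pollenizer(target_peak_day: int, candidate_peak_day: int) -> bool:
    """Peak days are day numbers within the roughly 30-day blossom period."""
    return abs(target_peak_day - candidate_peak_day) <= OVERLAP_DAYS

print(is_suitable_pollenizer(10, 14))  # True: peaks 4 days apart
print(is_suitable_pollenizer(10, 20))  # False: peaks 10 days apart
```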
Cultivars vary in their yield and the ultimate size of the tree, even when grown on the same rootstock. Some cultivars, if left unpruned, grow very large—letting them bear more fruit, but making harvesting more difficult. Depending on tree density (number of trees planted per unit surface area), mature trees typically bear 40–200 kg (90–440 lb) of apples each year, though productivity can be close to zero in poor years. Apples are harvested using three-point ladders that are designed to fit amongst the branches. Trees grafted on dwarfing rootstocks bear about 10–80 kg (20–180 lb) of fruit per year.[57]
Some farms with apple orchards open them to the public so consumers can pick their own apples.[61]
Crops ripen at different times of the year according to the cultivar. Cultivars that yield their crop in the summer include 'Gala', 'Golden Supreme', 'McIntosh', 'Transparent', 'Primate', 'Sweet Bough', and 'Duchess'; fall producers include 'Fuji', 'Jonagold', 'Golden Delicious', 'Red Delicious', 'Chenango', 'Gravenstein', 'Wealthy', 'McIntosh', 'Snow', and 'Blenheim'; winter producers include 'Winesap', 'Granny Smith', 'King', 'Wagener', 'Swayzie', 'Greening', and 'Tolman Sweet'.[24]
Commercially, apples can be stored for some months in controlled atmosphere chambers to delay ethylene-induced ripening. Apples are commonly stored in chambers with higher concentrations of carbon dioxide and high air filtration. This prevents ethylene concentrations from rising to higher amounts and preventing ripening from occurring too quickly.
For home storage, most cultivars of apple can be held for approximately two weeks when kept at the coolest part of the refrigerator (i.e. below 5 °C). Some can be stored up to a year without significant degradation.[dubious – discuss][62][verification needed] Some varieties of apples (e.g. 'Granny Smith' and 'Fuji') have more than three times the storage life of others.[63]
Non-organic apples may be sprayed with 1-methylcyclopropene, which blocks the apples' ethylene receptors and temporarily prevents them from ripening.[64]
Apple trees are susceptible to a number of fungal and bacterial diseases and insect pests. Many commercial orchards pursue a program of chemical sprays to maintain high fruit quality, tree health, and high yields. Organic methods, by contrast, prohibit the use of synthetic pesticides, though some older pesticides are allowed; they include, for instance, introducing a pest's natural predator to reduce its population.
A wide range of pests and diseases can affect the plant. Three of the more common diseases or pests are mildew, aphids, and apple scab.
Among the most serious disease problems are a bacterial disease called fire blight, and two fungal diseases: Gymnosporangium rust and black spot.[66] Other pests that affect apple trees include codling moths and apple maggots. Young apple trees are also prone to mammal pests like mice and deer, which feed on the soft bark of the trees, especially in winter.[67] The larvae of the apple clearwing moth (red-belted clearwing) burrow through the bark and into the phloem of apple trees, potentially causing significant damage.[68]
World production of apples in 2018 was 86 million tonnes, with China producing 46% of the total (table).[3] Secondary producers were the United States and Poland.[3]
A raw apple is 86% water and 14% carbohydrates, with negligible content of fat and protein (table). A reference serving of a raw apple with skin weighing 100 grams provides 52 calories and a moderate content of dietary fiber.[69] Otherwise, there is low content of all micronutrients (table).
All parts of the fruit, including the skin, except for the seeds, are suitable for human consumption. The core, from stem to bottom, containing the seeds, is usually not eaten and is discarded.
Apples can be consumed in various ways: as juice, raw in salads, baked in pies, cooked into sauces and spreads like apple butter, and in other baked dishes.[70]
Several techniques are used to preserve apples and apple products. Apples can be canned, dried or frozen.[70] Canned or frozen apples are eventually baked into pies or other cooked dishes. Apple juice or cider is also bottled. Apple juice is often concentrated and frozen.
Apples are often eaten raw. Cultivars bred for raw consumption are termed dessert or table apples.
Apples are an important ingredient in many desserts, such as apple pie, apple crumble, apple crisp and apple cake. When cooked, some apple cultivars easily form a puree known as apple sauce. Apples are also made into apple butter and apple jelly. They are often baked or stewed and are also cooked into some meat dishes. Dried apples can be eaten or reconstituted (soaked in water, alcohol or some other liquid).
Apples are milled or pressed to produce apple juice, which may be drunk unfiltered (called apple cider in North America), or filtered. Filtered juice is often concentrated and frozen, then reconstituted later and consumed. Apple juice can be fermented to make cider (called hard cider in North America), ciderkin, and vinegar. Through distillation, various alcoholic beverages can be produced, such as applejack, Calvados, and apfelwein.[71]
Organic apples are commonly produced in the United States.[72] Due to infestations by key insects and diseases, organic production is difficult in Europe.[73] The use of pesticides based on sulfur, copper, microorganisms, viruses, clay powders, or plant extracts (pyrethrum, neem) has been approved by the EU Organic Standing Committee to improve organic yield and quality.[73] A light coating of kaolin, which forms a physical barrier to some pests, may also help prevent apple sun scalding.[57]
Apple skins and seeds contain various phytochemicals, particularly polyphenols which are under preliminary research for their potential health effects.[74]
The enzyme polyphenol oxidase causes browning in sliced or bruised apples by catalyzing the oxidation of phenolic compounds to o-quinones, a browning factor.[75] Browning reduces apple taste, color, and food value. Arctic Apples, a non-browning group of apples introduced to the United States market in 2019, have been genetically modified to silence the expression of polyphenol oxidase, thereby delaying a browning effect and improving apple eating quality.[76][77] The US Food and Drug Administration in 2015, and the Canadian Food Inspection Agency in 2017, determined that Arctic apples are as safe and nutritious as conventional apples.[78][79]
Apple seed oil, obtained by pressing apple seeds, is used in manufacturing cosmetics.[80]
Preliminary research is investigating whether apple consumption may affect the risk of some types of cancer.[74][81]
One form of apple allergy, often found in northern Europe, is called birch-apple syndrome and is found in people who are also allergic to birch pollen.[82] Allergic reactions are triggered by a protein in apples that is similar to birch pollen, and people affected by this protein can also develop allergies to other fruits, nuts, and vegetables. Reactions, which entail oral allergy syndrome (OAS), generally involve itching and inflammation of the mouth and throat,[82] but in rare cases can also include life-threatening anaphylaxis.[83] This reaction only occurs when raw fruit is consumed—the allergen is neutralized in the cooking process. The variety of apple, maturity and storage conditions can change the amount of allergen present in individual fruits. Long storage times can increase the amount of proteins that cause birch-apple syndrome.[82]
In other areas, such as the Mediterranean, some individuals have adverse reactions to apples because of their similarity to peaches.[82] This form of apple allergy also includes OAS, but often has more severe symptoms, such as vomiting, abdominal pain and urticaria, and can be life-threatening. Individuals with this form of allergy can also develop reactions to other fruits and nuts. Cooking does not break down the protein causing this particular reaction, so affected individuals cannot eat raw or cooked apples. Freshly harvested, over-ripe fruits tend to have the highest levels of the protein that causes this reaction.[82]
Breeding efforts have yet to produce a hypoallergenic fruit suitable for either of the two forms of apple allergy.[82]
Apple seeds contain small amounts of amygdalin, a sugar and cyanide compound known as a cyanogenic glycoside. Ingesting small amounts of apple seeds causes no ill effects, but consumption of extremely large doses can cause adverse reactions. It may take several hours before the poison takes effect, as cyanogenic glycosides must be hydrolyzed before the cyanide ion is released.[84] The United States National Library of Medicine's Hazardous Substances Data Bank records no cases of amygdalin poisoning from consuming apple seeds.[85]
Books
en/4719.html.txt
The potato is a root vegetable native to the Americas, a starchy tuber of the plant Solanum tuberosum, and the plant itself is a perennial in the nightshade family, Solanaceae.[2]
Wild potato species, originating in modern-day Peru, can be found throughout the Americas, from the United States to southern Chile.[3] The potato was originally believed to have been domesticated by indigenous peoples of the Americas independently in multiple locations,[4] but later genetic testing of the wide variety of cultivars and wild species traced a single origin for potatoes. In the area of present-day southern Peru and extreme northwestern Bolivia, from a species in the Solanum brevicaule complex, potatoes were domesticated approximately 7,000–10,000 years ago.[5][6][7] In the Andes region of South America, where the species is indigenous, some close relatives of the potato are cultivated.
Potatoes were introduced to Europe from the Americas in the second half of the 16th century by the Spanish. Today they are a staple food in many parts of the world and an integral part of much of the world's food supply. As of 2014, potatoes were the world's fourth-largest food crop after maize (corn), wheat, and rice.[8]
Following millennia of selective breeding, there are now over 5,000 different types of potatoes.[6] Over 99% of presently cultivated potatoes worldwide descended from varieties that originated in the lowlands of south-central Chile.[9][10]
The importance of the potato as a food source and culinary ingredient varies by region and is still changing. It remains an essential crop in Europe, especially Northern and Eastern Europe, where per capita production is still the highest in the world, while the most rapid expansion in production over the past few decades has occurred in southern and eastern Asia, with China and India leading the world in overall production as of 2018.
Like the tomato, the potato is a nightshade in the genus Solanum, and the vegetative and fruiting parts of the potato contain the toxin solanine which is dangerous for human consumption. Normal potato tubers that have been grown and stored properly produce glycoalkaloids in amounts small enough to be negligible to human health, but if green sections of the plant (namely sprouts and skins) are exposed to light, the tuber can accumulate a high enough concentration of glycoalkaloids to affect human health.[11][12]
The English word potato comes from Spanish patata (the name used in Spain). The Royal Spanish Academy says the Spanish word is a hybrid of the Taíno batata ('sweet potato') and the Quechua papa ('potato').[13][14] The name originally referred to the sweet potato although the two plants are not closely related. The 16th-century English herbalist John Gerard referred to sweet potatoes as common potatoes, and used the terms bastard potatoes and Virginia potatoes for the species we now call potato.[15] In many of the chronicles detailing agriculture and plants, no distinction is made between the two.[16] Potatoes are occasionally referred to as Irish potatoes or white potatoes in the United States, to distinguish them from sweet potatoes.[15]
The name spud for a small potato comes from the digging of soil (or a hole) prior to the planting of potatoes. The word has an unknown origin and was originally (c. 1440) used as a term for a short knife or dagger, probably related to the Latin spad- a word root meaning "sword"; compare Spanish espada, English "spade", and spadroon. It subsequently transferred over to a variety of digging tools. Around 1845, the name transferred to the tuber itself, the first record of this usage being in New Zealand English.[17] The origin of the word spud has erroneously been attributed to an 18th-century activist group dedicated to keeping the potato out of Britain, calling itself The Society for the Prevention of Unwholesome Diet (S.P.U.D.). It was Mario Pei's 1949 The Story of Language that can be blamed for the word's false origin. Pei writes, "the potato, for its part, was in disrepute some centuries ago. Some Englishmen who did not fancy potatoes formed a Society for the Prevention of Unwholesome Diet. The initials of the main words in this title gave rise to spud." Like most other pre-20th century acronymic origins, this is false, and there is no evidence that a Society for the Prevention of Unwholesome Diet ever existed.[18][14]
Potato plants are herbaceous perennials that grow about 60 cm (24 in) high, depending on variety, with the leaves dying back after flowering, fruiting and tuber formation. They bear white, pink, red, blue, or purple flowers with yellow stamens. In general, the tubers of varieties with white flowers have white skins, while those of varieties with colored flowers tend to have pinkish skins.[19] Potatoes are mostly cross-pollinated by insects such as bumblebees, which carry pollen from other potato plants, though a substantial amount of self-fertilizing occurs as well. Tubers form in response to decreasing day length, although this tendency has been minimized in commercial varieties.[20]
After flowering, potato plants produce small green fruits that resemble green cherry tomatoes, each containing about 300 seeds. Like all parts of the plant except the tubers, the fruits contain the toxic alkaloid solanine and are therefore unsuitable for consumption. All new potato varieties are grown from seeds, also called "true potato seed", "TPS" or "botanical seed" to distinguish it from seed tubers. New varieties grown from seed can be propagated vegetatively by planting tubers, pieces of tubers cut to include at least one or two eyes, or cuttings, a practice used in greenhouses for the production of healthy seed tubers. Plants propagated from tubers are clones of the parent, whereas those propagated from seed produce a range of different varieties.
There are about 5,000 potato varieties worldwide. Three thousand of them are found in the Andes alone, mainly in Peru, Bolivia, Ecuador, Chile, and Colombia. They belong to eight or nine species, depending on the taxonomic school. Apart from the 5,000 cultivated varieties, there are about 200 wild species and subspecies, many of which can be cross-bred with cultivated varieties. Cross-breeding has been done repeatedly to transfer resistances to certain pests and diseases from the gene pool of wild species to the gene pool of cultivated potato species. Genetically modified varieties have met public resistance in the United States and in the European Union.[21][22]
The major species grown worldwide is Solanum tuberosum (a tetraploid with 48 chromosomes), and modern varieties of this species are the most widely cultivated. There are also four diploid species (with 24 chromosomes): S. stenotomum, S. phureja, S. goniocalyx, and S. ajanhuiri. There are two triploid species (with 36 chromosomes): S. chaucha and S. juzepczukii. There is one pentaploid cultivated species (with 60 chromosomes): S. curtilobum. There are two major subspecies of Solanum tuberosum: andigena, or Andean; and tuberosum, or Chilean.[23] The Andean potato is adapted to the short-day conditions prevalent in the mountainous equatorial and tropical regions where it originated; the Chilean potato, however, native to the Chiloé Archipelago, is adapted to the long-day conditions prevalent in the higher latitude region of southern Chile.[24]
The International Potato Center, based in Lima, Peru, holds an ISO-accredited collection of potato germplasm.[25] The international Potato Genome Sequencing Consortium announced in 2009 that they had achieved a draft sequence of the potato genome.[26] The potato genome contains 12 chromosomes and 860 million base pairs, making it a medium-sized plant genome.[27] More than 99 percent of all potato varieties currently grown are direct descendants of a subspecies that once grew in the lowlands of south-central Chile.[28] Nonetheless, genetic testing of the wide variety of cultivars and wild species affirms that all potato subspecies derive from a single origin in the area of present-day southern Peru and extreme Northwestern Bolivia (from a species in the Solanum brevicaule complex).[5][6][7] The Crop Wild Relatives Prebreeding project encourages the use of wild relatives in breeding programs. Enriching and preserving the gene bank collection to make potatoes adaptive to diverse environmental conditions is seen as a pressing issue due to climate change.[29]
Most modern potatoes grown in North America arrived through European settlement and not independently from the South American sources, although at least one wild potato species, Solanum fendleri, naturally ranges from Peru into Texas, where it is used in breeding for resistance to a nematode species that attacks cultivated potatoes. A secondary center of genetic variability of the potato is Mexico, where important wild species that have been used extensively in modern breeding are found, such as the hexaploid Solanum demissum, as a source of resistance to the devastating late blight disease.[30] Another relative native to this region, Solanum bulbocastanum, has been used to genetically engineer the potato to resist potato blight.[31]
Potatoes yield abundantly with little effort, and adapt readily to diverse climates as long as the climate is cool and moist enough for the plants to gather sufficient water from the soil to form the starchy tubers. Potatoes do not keep very well in storage and are vulnerable to moulds that feed on the stored tubers and quickly turn them rotten, whereas crops such as grain can be stored for several years with a low risk of rot. The food energy yield of potatoes – about 95 gigajoules per hectare (9.2 million kilocalories per acre) – is higher than that of maize (78 GJ/ha or 7.5×10^6 kcal/acre), rice (77 GJ/ha or 7.4×10^6 kcal/acre), wheat (31 GJ/ha or 3×10^6 kcal/acre), or soybeans (29 GJ/ha or 2.8×10^6 kcal/acre).[32]
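The yield figures above can be cross-checked with a short unit conversion. This is a minimal sketch using standard conversion factors (1 kcal = 4184 J, 1 ha ≈ 2.471 acres); the function name is ours:

```python
# Convert a crop's food-energy yield from GJ/ha to kcal/acre,
# reproducing the figures quoted in the text.
KCAL_PER_GJ = 1e9 / 4184      # 1 kcal = 4184 J, 1 GJ = 1e9 J
ACRES_PER_HA = 2.4710538      # 1 hectare = 2.4710538 acres

def gj_per_ha_to_kcal_per_acre(gj_per_ha: float) -> float:
    return gj_per_ha * KCAL_PER_GJ / ACRES_PER_HA

yields_gj_per_ha = {"potato": 95, "maize": 78, "rice": 77, "wheat": 31, "soybean": 29}
for crop, y in yields_gj_per_ha.items():
    kcal_acre = gj_per_ha_to_kcal_per_acre(y)
    print(f"{crop}: {kcal_acre / 1e6:.1f} million kcal/acre")
```

Running this reproduces the parenthetical values in the paragraph (9.2, 7.5, 7.4, 3.0 and 2.8 million kcal/acre, respectively).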
There are close to 4,000 varieties of potato including common commercial varieties, each of which has specific agricultural or culinary attributes.[33] Around 80 varieties are commercially available in the UK.[34] In general, varieties are categorized into a few main groups based on common characteristics, such as russet potatoes (rough brown skin), red potatoes, white potatoes, yellow potatoes (also called Yukon potatoes) and purple potatoes.
For culinary purposes, varieties are often differentiated by their waxiness: floury or mealy baking potatoes have more starch (20–22%) than waxy boiling potatoes (16–18%). The distinction may also arise from variation in the comparative ratio of two different potato starch compounds: amylose and amylopectin. Amylose, a long-chain molecule, diffuses from the starch granule when cooked in water, and lends itself to dishes where the potato is mashed. Varieties that contain a slightly higher amylopectin content, which is a highly branched molecule, help the potato retain its shape after being boiled in water.[35] Potatoes that are good for making potato chips or potato crisps are sometimes called "chipping potatoes", which means they meet the basic requirements of similar varietal characteristics, being firm, fairly clean, and fairly well-shaped.[36]
The European Cultivated Potato Database (ECPD) is an online collaborative database of potato variety descriptions that is updated and maintained by the Scottish Agricultural Science Agency within the framework of the European Cooperative Programme for Crop Genetic Resources Networks (ECP/GR)—which is run by the International Plant Genetic Resources Institute (IPGRI).[37]
Dozens of potato cultivars have been selectively bred specifically for their skin or, more commonly, flesh color, including gold, red, and blue varieties[38] that contain varying amounts of phytochemicals, including carotenoids for gold/yellow or polyphenols for red or blue cultivars.[39] Carotenoid compounds include provitamin A alpha-carotene and beta-carotene, which are converted to the essential nutrient, vitamin A, during digestion. Anthocyanins mainly responsible for red or blue pigmentation in potato cultivars do not have nutritional significance, but are used for visual variety and consumer appeal.[40] As of 2010, potatoes have also been bioengineered specifically for these pigmentation traits.[41]
Genetic research has produced several genetically modified varieties. 'New Leaf', owned by Monsanto Company, incorporates genes from Bacillus thuringiensis, which confers resistance to the Colorado potato beetle; 'New Leaf Plus' and 'New Leaf Y', approved by US regulatory agencies during the 1990s, also include resistance to viruses. McDonald's, Burger King, Frito-Lay, and Procter & Gamble announced they would not use genetically modified potatoes, and Monsanto published its intent to discontinue the line in March 2001.[42]
Potato starch consists of two main components, amylose and amylopectin, the latter of which is most industrially useful. BASF developed the Amflora potato, which was modified to express antisense RNA to inactivate the gene for granule-bound starch synthase, an enzyme which catalyzes the formation of amylose.[43] Amflora potatoes therefore produce starch consisting almost entirely of amylopectin, and are thus more useful for the starch industry. In 2010, the European Commission cleared the way for 'Amflora' to be grown in the European Union for industrial purposes only—not for food. Nevertheless, under EU rules, individual countries have the right to decide whether they will allow this potato to be grown on their territory. Commercial planting of 'Amflora' was expected in the Czech Republic and Germany in the spring of 2010, and Sweden and the Netherlands in subsequent years.[44] Another GM potato variety developed by BASF is 'Fortuna' which was made resistant to late blight by adding two resistance genes, blb1 and blb2, which originate from the Mexican wild potato Solanum bulbocastanum.[45][46] In October 2011 BASF requested cultivation and marketing approval as a feed and food from the EFSA. In 2012, GMO development in Europe was stopped by BASF.[47][48]
In November 2014, the USDA approved a genetically modified potato developed by J.R. Simplot Company, which contains genetic modifications that prevent bruising and produce less acrylamide when fried than conventional potatoes; the modifications do not cause new proteins to be made, but rather prevent proteins from being made via RNA interference.[49][50][51]
The potato was first domesticated in the region of modern-day southern Peru and northwestern Bolivia[5] between 8000 and 5000 BC.[6] It has since spread around the world and become a staple crop in many countries.
The earliest archaeologically verified potato tuber remains have been found at the coastal site of Ancon (central Peru), dating to 2500 BC.[52][53] The most widely cultivated variety, Solanum tuberosum tuberosum, is indigenous to the Chiloé Archipelago, and has been cultivated by the local indigenous people since before the Spanish conquest.[24][54]
According to conservative estimates, the introduction of the potato was responsible for a quarter of the growth in Old World population and urbanization between 1700 and 1900.[55] In the Altiplano, potatoes provided the principal energy source for the Inca civilization, its predecessors, and its Spanish successor. Following the Spanish conquest of the Inca Empire, the Spanish introduced the potato to Europe in the second half of the 16th century, part of the Columbian exchange. The staple was subsequently conveyed by European mariners to territories and ports throughout the world. The potato was slow to be adopted by European farmers, but soon enough it became an important food staple and field crop that played a major role in the European 19th century population boom.[7] However, lack of genetic diversity, due to the very limited number of varieties initially introduced, left the crop vulnerable to disease. In 1845, a plant disease known as late blight, caused by the fungus-like oomycete Phytophthora infestans, spread rapidly through the poorer communities of western Ireland as well as parts of the Scottish Highlands, resulting in the crop failures that led to the Great Irish Famine.[30] Thousands of varieties still persist in the Andes however, where over 100 cultivars might be found in a single valley, and a dozen or more might be maintained by a single agricultural household.[56]
In 2018, world production of potatoes was 368 million tonnes, led by China with 27% of the total (table). Other major producers were India, Russia, Ukraine and the United States. It remains an essential crop in Europe (especially northern and eastern Europe), where per capita production is still the highest in the world, but the most rapid expansion over the past few decades has occurred in southern and eastern Asia.[8][57]
A raw potato is 79% water, 17% carbohydrates (88% is starch), 2% protein, and contains negligible fat (see table). In a 100-gram (3 1⁄2-ounce) portion, raw potato provides 322 kilojoules (77 kilocalories) of food energy and is a rich source of vitamin B6 and vitamin C (23% and 24% of the Daily Value, respectively), with no other vitamins or minerals in significant amount (see table). The potato is rarely eaten raw because raw potato starch is poorly digested by humans.[58] When a potato is baked, its contents of vitamin B6 and vitamin C decline notably, while there is little significant change in the amount of other nutrients.[59]
Potatoes are often broadly classified as having a high glycemic index (GI) and so are often excluded from the diets of individuals trying to follow a low-GI diet. The GI of potatoes can vary considerably depending on the cultivar or cultivar category (such as "red", russet, "white", or King Edward), growing conditions and storage, preparation methods (by cooking method, whether it is eaten hot or cold, whether it is mashed or cubed or consumed whole), and accompanying foods consumed (especially the addition of various high-fat or high-protein toppings).[60] In particular, consuming reheated or cooled potatoes that were previously cooked may yield a lower GI effect.[60]
In the UK, potatoes are not considered by the National Health Service (NHS) as counting or contributing towards the recommended daily five portions of fruit and vegetables, the 5-A-Day program.[61]
This table shows the nutrient content of potatoes next to other major staple foods, each one measured in its respective raw state, even though staple foods are not commonly eaten raw and are usually sprouted or cooked before eating. In sprouted and cooked form, the relative nutritional and anti-nutritional contents of each of these grains (or other foods) may be different from the values in this table. Each nutrient (every row) has the highest number highlighted to show the staple food with the greatest amount in a 100-gram raw portion.
A raw yellow dent corn
B raw unenriched long-grain white rice
C raw hard red winter wheat
D raw potato with flesh and skin
E raw cassava
F raw green soybeans
G raw sweet potato
H raw sorghum
Y raw yam
Z raw plantains
Potatoes contain toxic compounds known as glycoalkaloids, of which the most prevalent are solanine and chaconine. Solanine is found in other plants in the same family, Solanaceae, which includes such plants as deadly nightshade (Atropa belladonna), henbane (Hyoscyamus niger) and tobacco (Nicotiana spp.), as well as the food plants eggplant and tomato. These compounds, which protect the potato plant from its predators, are generally concentrated in its leaves, flowers, sprouts, and fruits (in contrast to the tubers).[63] In a summary of several studies, the glycoalkaloid content was highest in the flowers and sprouts and lowest in the tuber flesh. (The glycoalkaloid content was, in order from highest to lowest: flowers, sprouts, leaves, skin[clarification needed], roots, berries, peel [skin plus outer cortex of tuber flesh], stems, and tuber flesh).[11]
Exposure to light, physical damage, and age increase glycoalkaloid content within the tuber.[12] Cooking at high temperatures—over 170 °C (338 °F)—partly destroys these compounds. The concentration of glycoalkaloids in wild potatoes is sufficient to produce toxic effects in humans. Glycoalkaloid poisoning may cause headaches, diarrhea, cramps, and, in severe cases, coma and death. However, poisoning from cultivated potato varieties is very rare. Light exposure causes greening from chlorophyll synthesis, giving a visual clue as to which areas of the tuber may have become more toxic. However, this does not provide a definitive guide, as greening and glycoalkaloid accumulation can occur independently of each other.
Different potato varieties contain different levels of glycoalkaloids. The Lenape variety was released in 1967 but was withdrawn in 1970 as it contained high levels of glycoalkaloids.[64] Since then, breeders developing new varieties test for this, and sometimes have to discard an otherwise promising cultivar. Breeders try to keep glycoalkaloid levels below 200 mg/kg (200 ppmw). However, when these commercial varieties turn green, they can still approach solanine concentrations of 1000 mg/kg (1000 ppmw). In normal potatoes, analysis has shown solanine levels may be as little as 3.5% of the breeders' maximum, with 7–187 mg/kg being found.[65] While a normal potato tuber has 12–20 mg/kg of glycoalkaloid content, a green potato tuber contains 250–280 mg/kg and its skin has 1500–2200 mg/kg.[66]
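The reported concentrations can be compared against the breeders' 200 mg/kg target with a trivial check. This is an illustration only, using midpoints of the ranges quoted above; the variable names are ours, and note that mg/kg is the same as ppm by weight:

```python
# Compare reported glycoalkaloid levels (mg/kg = ppmw) against the
# breeders' informal 200 mg/kg ceiling mentioned in the text.
BREEDERS_LIMIT_MG_PER_KG = 200

samples_mg_per_kg = {
    "normal tuber flesh (low)": 12,
    "normal tuber flesh (high)": 20,
    "green tuber flesh": 265,   # midpoint of the 250-280 mg/kg range
    "green tuber skin": 1850,   # midpoint of the 1500-2200 mg/kg range
}

for name, level in samples_mg_per_kg.items():
    verdict = "exceeds" if level > BREEDERS_LIMIT_MG_PER_KG else "within"
    print(f"{name}: {level} mg/kg ({verdict} the 200 mg/kg target)")
```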
Potatoes are generally grown from seed potatoes, tubers specifically grown to be free from disease and to provide consistent and healthy plants. To be disease free, the areas where seed potatoes are grown are selected with care. In the US, this restricts production of seed potatoes to only 15 states out of all 50 states where potatoes are grown.[67] These locations are selected for their cold, hard winters that kill pests and summers with long sunshine hours for optimum growth. In the UK, most seed potatoes originate in Scotland, in areas where westerly winds reduce aphid attack and the spread of potato virus pathogens.[68][failed verification]
Potato growth can be divided into five phases. During the first phase, sprouts emerge from the seed potatoes and root growth begins. During the second, photosynthesis begins as the plant develops leaves and branches above-ground and stolons develop from lower leaf axils on the below-ground stem. In the third phase the tips of the stolons swell forming new tubers and the shoots continue to grow and flowers typically develop soon after. Tuber bulking occurs during the fourth phase, when the plant begins investing the majority of its resources in its newly formed tubers. At this phase, several factors are critical to a good yield: optimal soil moisture and temperature, soil nutrient availability and balance, and resistance to pest attacks. The fifth phase is the maturation of the tubers: the plant canopy dies back, the tuber skins harden, and the sugars in the tubers convert to starches.[69][70]
New tubers may start growing at the surface of the soil. Since exposure to light leads to an undesirable greening of the skins and the development of solanine as a protection from the sun's rays, growers cover surface tubers. Commercial growers cover them by piling additional soil around the base of the plant as it grows (called "hilling" up, or in British English "earthing up"). An alternative method, used by home gardeners and smaller-scale growers, involves covering the growing area with organic mulches such as straw or plastic sheets.[71]
Correct potato husbandry can be an arduous task in some circumstances. Good ground preparation, harrowing, plowing, and rolling are always needed, along with a little grace from the weather and a good source of water.[72] Three successive plowings, with associated harrowing and rolling, are desirable before planting. Eliminating all root-weeds is desirable in potato cultivation. In general, the potatoes themselves are grown from the eyes of another potato and not from seed. Home gardeners often plant a piece of potato with two or three eyes in a hill of mounded soil. Commercial growers plant potatoes as a row crop using seed tubers, young plants or microtubers and may mound the entire row. Seed potato crops are rogued in some countries to eliminate diseased plants or those of a different variety from the seed crop.
Potatoes are sensitive to heavy frosts, which damage them in the ground. Even cold weather makes potatoes more susceptible to bruising and possibly later rotting, which can quickly ruin a large stored crop.
The historically significant Phytophthora infestans (late blight) remains an ongoing problem in Europe[30][73] and the United States.[74] Other potato diseases include Rhizoctonia, Sclerotinia, black leg, powdery mildew, powdery scab and leafroll virus.
Insects that commonly transmit potato diseases or damage the plants include the Colorado potato beetle, the potato tuber moth, the green peach aphid (Myzus persicae), the potato aphid, beet leafhoppers, thrips, and mites. The potato cyst nematode is a microscopic worm that thrives on the roots, thus causing the potato plants to wilt. Since its eggs can survive in the soil for several years, crop rotation is recommended.
During the crop year 2008, many of the certified organic potatoes produced in the United Kingdom and certified by the Soil Association as organic were sprayed with a copper pesticide[75] to control potato blight (Phytophthora infestans). According to the Soil Association, the total copper that can be applied to organic land is 6 kg/ha/year.[76]
According to an Environmental Working Group analysis of USDA and FDA pesticide residue tests performed from 2000 through 2008, 84% of the 2,216 tested potato samples contained detectable traces of at least one pesticide. A total of 36 unique pesticides were detected on potatoes over the 2,216 samples, though no individual sample contained more than 6 unique pesticide traces, and the average was 1.29 detectable unique pesticide traces per sample. The average quantity of all pesticide traces found in the 2,216 samples was 1.602 ppm. While this was a very low value of pesticide residue, it was the highest amongst the 50 vegetables analyzed.[77]
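The quoted averages fit together arithmetically with the sample count; a quick illustrative sketch, using only the figures cited above:

```python
# Figures quoted above from the EWG analysis of USDA/FDA tests (2000-2008).
total_samples = 2216       # potato samples tested
share_with_residue = 0.84  # 84% had at least one detectable pesticide trace
avg_unique_traces = 1.29   # average unique pesticide traces per sample

samples_with_residue = round(total_samples * share_with_residue)
total_detections = round(total_samples * avg_unique_traces)

print(samples_with_residue)  # 1861 samples with at least one detectable trace
print(total_detections)      # 2859 individual pesticide detections overall
```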
At harvest time, gardeners usually dig up potatoes with a long-handled, three-prong "grape" (or graip), i.e., a spading fork, or a potato hook, which is similar to the graip but with tines at a 90° angle to the handle. In larger plots, the plow is the fastest implement for unearthing potatoes. Commercial harvesting is typically done with large potato harvesters, which scoop up the plant and surrounding earth. This is transported up an apron chain consisting of steel links several feet wide, which separates some of the dirt. The chain deposits into an area where further separation occurs. Different designs use different systems at this point. The most complex designs use vine choppers and shakers, along with a blower system to separate the potatoes from the plant. The result is then usually run past workers who continue to sort out plant material, stones, and rotten potatoes before the potatoes are continuously delivered to a wagon or truck. Further inspection and separation occurs when the potatoes are unloaded from the field vehicles and put into storage.
Immature potatoes may be sold as "creamer potatoes" and are particularly valued for taste. These are often harvested by the home gardener or farmer by "grabbling", i.e. pulling out the young tubers by hand while leaving the plant in place. A creamer potato is a variety of potato harvested before it matures to keep it small and tender. It is generally either a Yukon Gold potato or a red potato, called gold creamers[78] or red creamers respectively, and measures approximately 2.5 cm (1 in) in diameter.[79] The skin of creamer potatoes is waxy and high in moisture content, and the flesh contains a lower level of starch than other potatoes. Like potatoes in general, they can be prepared by boiling, baking, frying, and roasting.[79] Slightly older than creamer potatoes are "new potatoes", which are also prized for their taste and texture and often come from the same varieties.[80]
Potatoes are usually cured after harvest to improve skin-set. Skin-set is the process by which the skin of the potato becomes resistant to skinning damage. Potato tubers may be susceptible to skinning at harvest and suffer skinning damage during harvest and handling operations. Curing allows the skin to fully set and any wounds to heal. Wound-healing prevents infection and water-loss from the tubers during storage. Curing is normally done at relatively warm temperatures (10 to 16 °C or 50 to 60 °F) with high humidity and good gas-exchange if at all possible.[81]
Storage facilities need to be carefully designed to keep the potatoes alive and slow the natural process of decomposition, which involves the breakdown of starch. It is crucial that the storage area is dark, ventilated well and, for long-term storage, maintained at temperatures near 4 °C (39 °F). For short-term storage, temperatures of about 7 to 10 °C (45 to 50 °F) are preferred.[82]
On the other hand, temperatures below 4 °C (39 °F) convert the starch in potatoes into sugar, which alters their taste and cooking qualities and leads to higher acrylamide levels in the cooked product, especially in deep-fried dishes. The discovery of acrylamides in starchy foods in 2002 has led to international health concerns. They are believed to be probable carcinogens and their occurrence in cooked foods is being studied for potentially influencing health problems.[a][83]
Under optimum conditions in commercial warehouses, potatoes can be stored for up to 10–12 months.[82] The commercial storage and retrieval of potatoes involves several phases: first drying surface moisture; wound healing at 85% to 95% relative humidity and temperatures below 25 °C (77 °F); a staged cooling phase; a holding phase; and a reconditioning phase, during which the tubers are slowly warmed. Mechanical ventilation is used at various points during the process to prevent condensation and the accumulation of carbon dioxide.[82]
When stored in homes unrefrigerated, the shelf life is usually a few weeks.[citation needed]
If potatoes develop green areas or start to sprout, trimming or peeling those green-colored parts is inadequate to remove copresent toxins, and such potatoes are no longer edible.[84][85]
The world dedicated 18.6 million hectares (46 million acres) to potato cultivation in 2010; the world average yield was 17.4 tonnes per hectare (7.8 short tons per acre). The United States was the most productive country, with a nationwide average yield of 44.3 tonnes per hectare (19.8 short tons per acre).[86] The United Kingdom was a close second.
New Zealand farmers have demonstrated some of the best commercial yields in the world, ranging between 60 and 80 tonnes per hectare, with some reporting yields of 88 tonnes of potatoes per hectare.[87][88][89]
Yields vary widely among countries, even for the same variety of potato. Average potato yields in developed economies range between 38 and 44 tonnes per hectare. China and India accounted for over a third of the world's production in 2010, with yields of 14.7 and 19.9 tonnes per hectare respectively.[86] The yield gap between farms in developing and developed economies represents an opportunity loss of over 400 million tonnes of potato, an amount greater than 2010 world potato production. Potato crop yields are determined by factors such as the crop breed, seed age and quality, crop management practices and the plant environment. Improvements in one or more of these yield determinants, and a closure of the yield gap, can be a major boost to food supply and farmer incomes in the developing world.[90][91]
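The scale of the yield gap follows directly from the figures above; a back-of-envelope sketch, assuming purely for illustration that the whole world area reached a 40 t/ha mid-range developed-economy yield:

```python
# World potato figures for 2010, as quoted above.
area_ha = 18.6e6        # hectares under potato cultivation worldwide
world_yield = 17.4      # world average yield, tonnes per hectare
developed_yield = 40.0  # illustrative mid-range of the 38-44 t/ha quoted for developed economies

world_production_mt = area_ha * world_yield / 1e6                      # million tonnes
opportunity_loss_mt = area_ha * (developed_yield - world_yield) / 1e6  # million tonnes

print(round(world_production_mt))  # ~324 million tonnes produced in 2010
print(round(opportunity_loss_mt))  # ~420 million tonnes: the "over 400 million tonnes" gap
```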
Global warming is predicted to have significant effects on global potato production.[92] Like many crops, potatoes are likely to be affected by changes in atmospheric carbon dioxide, temperature and precipitation, as well as interactions between these factors.[92] As well as affecting potatoes directly, climate change will also affect the distributions and populations of many potato diseases and pests.
Potatoes are prepared in many ways: skin-on or peeled, whole or cut up, with seasonings or without. The only requirement involves cooking to swell the starch granules. Most potato dishes are served hot but some are first cooked, then served cold, notably potato salad and potato chips (crisps). Common dishes are: mashed potatoes, which are first boiled (usually peeled), and then mashed with milk or yogurt and butter; whole baked potatoes; boiled or steamed potatoes; French-fried potatoes or chips; cut into cubes and roasted; scalloped, diced, or sliced and fried (home fries); grated into small thin strips and fried (hash browns); grated and formed into dumplings, Rösti or potato pancakes. Unlike many foods, potatoes can also be easily cooked in a microwave oven and still retain nearly all of their nutritional value, provided they are covered in ventilated plastic wrap to prevent moisture from escaping; this method produces a meal very similar to a steamed potato, while retaining the appearance of a conventionally baked potato. Potato chunks also commonly appear as a stew ingredient. Potatoes are boiled for between 10 and 25 minutes,[94] depending on size and type, to become soft.
Potatoes are also used for purposes other than eating by humans, for example:
Peruvian cuisine features the potato as a primary ingredient in many dishes, as around 3,000 varieties of the tuber are grown there.[105]
Some of the more notable dishes include boiled potato as a base for several dishes or served with ají-based sauces, as in Papa a la Huancaína or ocopa; diced potato in soups such as cau cau; and dried potato (papa seca) in Carapulca. Seasoned mashed potato is used in causa Limeña and papa rellena. French-fried potatoes are a typical ingredient in Peruvian stir-fries, including the classic dish lomo saltado.
Chuño is a freeze-dried potato product traditionally made by Quechua and Aymara communities of Peru and Bolivia,[106] and is known in various countries of South America, including Peru, Bolivia, Argentina, and Chile. In Chile's Chiloé Archipelago, potatoes are the main ingredient of many dishes, including milcaos, chapaleles, curanto and chochoca. In Ecuador, the potato, as well as being a staple with most dishes, is featured in the hearty locro de papas, a thick soup of potato, squash, and cheese.
In the UK, potatoes form part of the traditional staple, fish and chips. Roast potatoes are commonly served as part of a Sunday roast dinner and mashed potatoes form a major component of several other traditional dishes, such as shepherd's pie, bubble and squeak, and bangers and mash. New potatoes may be cooked with mint and are often served with butter.[107]
The Tattie scone is a popular Scottish dish containing potatoes. Colcannon is a traditional Irish food made with mashed potato, shredded kale or cabbage, and onion; champ is a similar dish. Boxty pancakes are eaten throughout Ireland, although associated especially with the North, and in Irish diaspora communities; they are traditionally made with grated potatoes, soaked to loosen the starch and mixed with flour, buttermilk and baking powder. A variant eaten and sold in Lancashire, especially Liverpool, is made with cooked and mashed potatoes.
Bryndzové halušky is the Slovak national dish, made of a batter of flour and finely grated potatoes that is boiled to form dumplings. These are then mixed with regionally varying ingredients.
In Germany, Northern and Eastern Europe (especially in Scandinavian countries), Finland, Poland, Russia, Belarus and Ukraine, newly harvested, early ripening varieties are considered a special delicacy. Boiled whole and served unpeeled with dill, these "new potatoes" are traditionally consumed with Baltic herring. Puddings made from grated potatoes (kugel, kugelis, and potato babka) are popular items of Ashkenazi, Lithuanian, and Belarusian cuisine.[108] German fries and various versions of potato salad are part of German cuisine. Bauernfrühstück (literally "farmer's breakfast") is a warm German dish made from fried potatoes, eggs, ham and vegetables.
Cepelinai are the Lithuanian national dish: dumplings made from grated raw potatoes, boiled in water and usually stuffed with minced meat, although dry cottage cheese (curd) or mushrooms are sometimes used instead.[109]
In Western Europe, especially in Belgium, sliced potatoes are fried to create frieten, the original French fried potatoes. Stamppot, a traditional Dutch meal, is based on mashed potatoes mixed with vegetables.
In France, the most notable potato dish is the Hachis Parmentier, named after Antoine-Augustin Parmentier, a French pharmacist, nutritionist, and agronomist who, in the late 18th century, was instrumental in the acceptance of the potato as an edible crop in the country. Pâté aux pommes de terre is a regional potato dish from the central Allier and Limousin regions. Gratin dauphinois, consisting of baked thinly sliced potatoes with cream or milk, and tartiflette, with Reblochon cheese, are also widespread.
In the north of Italy, in particular, in the Friuli region of the northeast, potatoes serve to make a type of pasta called gnocchi.[110] Similarly, cooked and mashed potatoes or potato flour can be used in the Knödel or dumpling eaten with or added to meat dishes all over central and Eastern Europe, but especially in Bavaria and Luxembourg. Potatoes form one of the main ingredients in many soups such as the vichyssoise and Albanian potato and cabbage soup. In western Norway, komle is popular.
A traditional Canary Islands dish is Canarian wrinkly potatoes, or papas arrugadas. Tortilla de patatas (potato omelette) and patatas bravas (fried potatoes in a spicy tomato sauce) are near-universal constituents of Spanish tapas.
In the US, potatoes have become one of the most widely consumed crops and are prepared with a variety of methods and condiments. French fries and often hash browns are commonly found in typical American fast-food burger joints and cafeterias. One popular preparation is a baked potato topped with cheddar cheese (or sour cream and chives), and in New England "smashed potatoes" (a chunkier variation on mashed potatoes, retaining the peel) are popular. Potato flakes are popular as an instant variety of mashed potatoes, which reconstitute into mashed potatoes by adding water, with butter or oil and salt to taste. A regional dish of Central New York, salt potatoes are bite-size new potatoes boiled in water saturated with salt, then served with melted butter. At more formal dinners, a common practice is to slice small red potatoes and roast them in an iron skillet. Among American Jews, eating latkes (fried potato pancakes) is common during the festival of Hanukkah.
A traditional Acadian dish from New Brunswick is known as poutine râpée. The Acadian poutine is a ball of grated and mashed potato, salted, sometimes filled with pork in the centre, and boiled. The result is a moist ball about the size of a baseball. It is commonly eaten with salt and pepper or brown sugar. It is believed to have originated from the German Klöße, prepared by early German settlers who lived among the Acadians. Poutine, by contrast, is a hearty serving of French fries, fresh cheese curds and hot gravy. Tracing its origins to Quebec in the 1950s, it has become a widespread and popular dish throughout Canada.
Idaho potatoes are graded into two classes: No. 1 potatoes are the highest quality, while No. 2 potatoes are rated lower due to their appearance (e.g. blemishes, bruises, or pointy ends).[111] Potato density can be assessed by floating the tubers in brines.[112] High-density potatoes are desirable in the production of dehydrated mashed potatoes, potato crisps and french fries.[112]
In South Asia, the potato is a very popular traditional staple. In India, the most popular potato dishes are aloo ki sabzi, batata vada, and samosa, a cone of dough stuffed with spicy mashed potato and a small amount of vegetables, then deep fried. Potatoes are also a major ingredient in fast food items, such as aloo chaat, where they are deep fried and served with chutney. In Northern India, alu dum and alu paratha are a favourite part of the diet; the first is a spicy curry of boiled potato, the second a type of stuffed chapati.
Masala dosa, a dish from South India, is well known all over India. It is a thin pancake of rice and pulse batter rolled over spicy mashed potato and eaten with sambhar and chutney. Poori, particularly in Tamil Nadu, is almost always served with mashed potato masala. Other favourite dishes are alu tikki and pakoda items.
Vada pav is a popular vegetarian fast food dish in Mumbai and other regions of Maharashtra, India.
Aloo posto (a curry with potatoes and poppy seeds) is immensely popular in East India, especially Bengal. Although the potato is not native to India, it has become a vital part of food all over the country, especially in North Indian cuisine. In Tamil Nadu, the tuber acquired a name based on its appearance, 'urulai-k-kizhangu' (உருளைக் கிழங்கு), meaning cylindrical tuber.
Aloo gosht, a potato and meat curry, is a popular dish in South Asia, especially in Pakistan.
In East Asia, particularly Southeast Asia, rice is by far the predominant starch crop, with potatoes a secondary crop, especially in China and Japan. The potato is, however, used in northern China, where rice is not easily grown; a popular dish there is 青椒土豆丝 (qīng jiāo tǔ dòu sī), made with green pepper, vinegar and thin slices of potato. In winter, roadside sellers in northern China also sell roasted potatoes. The potato is also occasionally seen in Korean and Thai cuisines.[113]
The potato has been an essential crop in the Andes since the pre-Columbian Era. The Moche culture from Northern Peru made ceramics from earth, water, and fire. This pottery was a sacred substance, formed in significant shapes and used to represent important themes. Potatoes are represented anthropomorphically as well as naturally.[114]
During the late 19th century, numerous images of potato harvesting appeared in European art, including the works of Willem Witsen and Anton Mauve.[115]
Van Gogh's 1885 painting The Potato Eaters portrays a family eating potatoes. Van Gogh said he wanted to depict peasants as they really were. He deliberately chose coarse and ugly models, thinking that they would be natural and unspoiled in his finished work.[116]
Jean-François Millet's The Potato Harvest depicts peasants working in the plains between Barbizon and Chailly. It presents a theme representative of the peasants' struggle for survival. Millet's technique for this work incorporated paste-like pigments thickly applied over a coarsely textured canvas.
Invented in 1949, and marketed and sold commercially by Hasbro in 1952, Mr. Potato Head is an American toy that consists of a plastic potato and attachable plastic parts, such as ears and eyes, to make a face. It was the first toy ever advertised on television.[117]
en/472.html.txt
Photography is the art, application and practice of creating durable images by recording light or other electromagnetic radiation, either electronically by means of an image sensor, or chemically by means of a light-sensitive material such as photographic film. It is employed in many fields of science, manufacturing (e.g., photolithography), and business, as well as its more direct uses for art, film and video production, recreational purposes, hobby, and mass communication.[1]
Typically, a lens is used to focus the light reflected or emitted from objects into a real image on the light-sensitive surface inside a camera during a timed exposure. With an electronic image sensor, this produces an electrical charge at each pixel, which is electronically processed and stored in a digital image file for subsequent display or processing. The result with photographic emulsion is an invisible latent image, which is later chemically "developed" into a visible image, either negative or positive depending on the purpose of the photographic material and the method of processing. A negative image on film is traditionally used to photographically create a positive image on a paper base, known as a print, either by using an enlarger or by contact printing.
The word "photography" was created from the Greek roots φωτός (phōtos), genitive of φῶς (phōs), "light"[2] and γραφή (graphé) "representation by means of lines" or "drawing",[3] together meaning "drawing with light".[4]
Several people may have coined the same new term from these roots independently. Hercules Florence, a French painter and inventor living in Campinas, Brazil, used the French form of the word, photographie, in private notes which a Brazilian historian believes were written in 1834.[5] This claim is widely reported but not yet broadly recognized internationally. The first use of the word by the Franco-Brazilian inventor became widely known after the research of Boris Kossoy in 1980.[6]
The German newspaper Vossische Zeitung of 25 February 1839 contained an article entitled Photographie, discussing several priority claims – especially Henry Fox Talbot's – regarding Daguerre's claim of invention.[7] The article is the earliest known occurrence of the word in public print.[8] It was signed "J.M.", believed to have been Berlin astronomer Johann von Maedler.[9] The astronomer Sir John Herschel is also credited with coining the word, independent of Talbot, in 1839.[10]
The inventors Nicéphore Niépce, Henry Fox Talbot and Louis Daguerre seem not to have known or used the word "photography", but referred to their processes as "Heliography" (Niépce), "Photogenic Drawing"/"Talbotype"/"Calotype" (Talbot) and "Daguerreotype" (Daguerre).[9]
Photography is the result of combining several technical discoveries relating to seeing an image and capturing the image. The discovery of the camera obscura ("dark chamber" in Latin) that provides an image of a scene dates back to ancient China. The Greek philosopher Aristotle and the mathematician Euclid independently described a camera obscura in the 4th century BCE.[11][12] In the 6th century CE, Byzantine mathematician Anthemius of Tralles used a type of camera obscura in his experiments.[13]
The Arab physicist Ibn al-Haytham (Alhazen) (965–1040) also invented a camera obscura as well as the first true pinhole camera.[12][14][15] The invention of the camera has been traced back to the work of Ibn al-Haytham.[16] While the effects of a single light passing through a pinhole had been described earlier,[16] Ibn al-Haytham gave the first correct analysis of the camera obscura,[17] including the first geometrical and quantitative descriptions of the phenomenon,[18] and was the first to use a screen in a dark room so that an image from one side of a hole in the surface could be projected onto a screen on the other side.[19] He also first understood the relationship between the focal point and the pinhole,[20] and performed early experiments with afterimages, laying the foundations for the invention of photography in the 19th century.[15]
Leonardo da Vinci mentions natural camerae obscurae formed by dark caves on the edge of a sunlit valley: a hole in the cave wall acts as a pinhole camera and projects a laterally reversed, upside-down image onto a piece of paper. Renaissance painters used the camera obscura, a box with a hole in it that allows light to enter and form an image on a sheet of paper; the optical rendering in color it produces dominates Western art.
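The pinhole projection described here follows from similar triangles: each scene point maps through the hole, so the image is inverted and scaled by the ratio of screen distance to object distance. A minimal sketch (the example numbers are invented for illustration):

```python
def pinhole_image_height(object_height, object_distance, screen_distance):
    """Height of a pinhole-camera image; the negative sign marks the inversion."""
    return -object_height * screen_distance / object_distance

# A 1.8 m figure standing 10 m from the hole, projected onto paper 0.5 m behind it:
h = pinhole_image_height(1.8, 10.0, 0.5)
print(h)  # -0.09: a 9 cm image, upside down
```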
The birth of photography was then concerned with inventing means to capture and keep the image produced by the camera obscura. Albertus Magnus (1193–1280) discovered silver nitrate,[21] and Georg Fabricius (1516–1571) discovered silver chloride,[22] and the techniques described in Ibn al-Haytham's Book of Optics are capable of producing primitive photographs using medieval materials.[23][24]
Daniele Barbaro described a diaphragm in 1566.[25] Wilhelm Homberg described how light darkened some chemicals (photochemical effect) in 1694.[26] The fiction book Giphantie, published in 1760, by French author Tiphaigne de la Roche, described what can be interpreted as photography.[25]
Around the year 1800, British inventor Thomas Wedgwood made the first known attempt to capture the image in a camera obscura by means of a light-sensitive substance. He used paper or white leather treated with silver nitrate. Although he succeeded in capturing the shadows of objects placed on the surface in direct sunlight, and even made shadow copies of paintings on glass, it was reported in 1802 that "the images formed by means of a camera obscura have been found too faint to produce, in any moderate time, an effect upon the nitrate of silver." The shadow images eventually darkened all over.[27]
The first permanent photoetching was an image produced in 1822 by the French inventor Nicéphore Niépce, but it was destroyed in a later attempt to make prints from it.[28] Niépce was successful again in 1825. In 1826 or 1827, he made the View from the Window at Le Gras, the earliest surviving photograph from nature (i.e., of the image of a real-world scene, as formed in a camera obscura by a lens).[29]
Because Niépce's camera photographs required an extremely long exposure (at least eight hours and probably several days), he sought to greatly improve his bitumen process or replace it with one that was more practical. In partnership with Louis Daguerre, he worked out post-exposure processing methods that produced visually superior results and replaced the bitumen with a more light-sensitive resin, but hours of exposure in the camera were still required. With an eye to eventual commercial exploitation, the partners opted for total secrecy.
Niépce died in 1833 and Daguerre then redirected the experiments toward the light-sensitive silver halides, which Niépce had abandoned many years earlier because of his inability to make the images he captured with them light-fast and permanent. Daguerre's efforts culminated in what would later be named the daguerreotype process. The essential elements—a silver-plated surface sensitized by iodine vapor, developed by mercury vapor, and "fixed" with hot saturated salt water—were in place in 1837. The required exposure time was measured in minutes instead of hours. Daguerre took the earliest confirmed photograph of a person in 1838 while capturing a view of a Paris street: unlike the other pedestrian and horse-drawn traffic on the busy boulevard, which appears deserted, one man having his boots polished stood sufficiently still throughout the several-minutes-long exposure to be visible. The existence of Daguerre's process was publicly announced, without details, on 7 January 1839. The news created an international sensation. France soon agreed to pay Daguerre a pension in exchange for the right to present his invention to the world as the gift of France, which occurred when complete working instructions were unveiled on 19 August 1839. In that same year, American photographer Robert Cornelius is credited with taking the earliest surviving photographic self-portrait.
In Brazil, Hercules Florence had apparently started working out a silver-salt-based paper process in 1832, later naming it Photographie.
Meanwhile, a British inventor, William Fox Talbot, had succeeded in making crude but reasonably light-fast silver images on paper as early as 1834 but had kept his work secret. After reading about Daguerre's invention in January 1839, Talbot published his hitherto secret method and set about improving on it. At first, like other pre-daguerreotype processes, Talbot's paper-based photography typically required hours-long exposures in the camera, but in 1840 he created the calotype process, which used the chemical development of a latent image to greatly reduce the exposure needed and compete with the daguerreotype. In both its original and calotype forms, Talbot's process, unlike Daguerre's, created a translucent negative which could be used to print multiple positive copies; this is the basis of most modern chemical photography up to the present day, as daguerreotypes could only be replicated by rephotographing them with a camera.[30] Talbot's famous tiny paper negative of the Oriel window in Lacock Abbey, one of a number of camera photographs he made in the summer of 1835, may be the oldest camera negative in existence.[31][32]
In France, Hippolyte Bayard invented his own process for producing direct positive paper prints and claimed to have invented photography earlier than Daguerre or Talbot.[33]
British chemist John Herschel made many contributions to the new field. He invented the cyanotype process, later familiar as the "blueprint". He was the first to use the terms "photography", "negative" and "positive". He had discovered in 1819 that sodium thiosulphate was a solvent of silver halides, and in 1839 he informed Talbot (and, indirectly, Daguerre) that it could be used to "fix" silver-halide-based photographs and make them completely light-fast. He made the first glass negative in late 1839.
In the March 1851 issue of The Chemist, Frederick Scott Archer published his wet plate collodion process. It became the most widely used photographic medium until the gelatin dry plate, introduced in the 1870s, eventually replaced it. There are three subsets of the collodion process: the Ambrotype (a positive image on glass), the Ferrotype or Tintype (a positive image on metal) and the glass negative, which was used to make positive prints on albumen or salted paper.
Many advances in photographic glass plates and printing were made during the rest of the 19th century. In 1891, Gabriel Lippmann introduced a process for making natural-color photographs based on the optical phenomenon of the interference of light waves. His scientifically elegant and important but ultimately impractical invention earned him the Nobel Prize in Physics in 1908.
Glass plates were the medium for most original camera photography from the late 1850s until the general introduction of flexible plastic films during the 1890s. Although the convenience of the film greatly popularized amateur photography, early films were somewhat more expensive and of markedly lower optical quality than their glass plate equivalents, and until the late 1910s they were not available in the large formats preferred by most professional photographers, so the new medium did not immediately or completely replace the old. Because of the superior dimensional stability of glass, the use of plates for some scientific applications, such as astrophotography, continued into the 1990s, and in the niche field of laser holography, it has persisted into the 2010s.
Hurter and Driffield began pioneering work on the light sensitivity of photographic emulsions in 1876. Their work enabled the first quantitative measure of film speed to be devised.
The first flexible photographic roll film was marketed by George Eastman, founder of Kodak, in 1885, but this original "film" was actually a coating on a paper base. As part of the processing, the image-bearing layer was stripped from the paper and transferred to a hardened gelatin support. The first transparent plastic roll film followed in 1889. It was made from highly flammable nitrocellulose known as nitrate film.
Although cellulose acetate or "safety film" had been introduced by Kodak in 1908,[34] at first it found only a few special applications as an alternative to the hazardous nitrate film, which had the advantages of being considerably tougher, slightly more transparent, and cheaper. The changeover was not completed for X-ray films until 1933, and although safety film was always used for 16 mm and 8 mm home movies, nitrate film remained standard for theatrical 35 mm motion pictures until it was finally discontinued in 1951.
Films remained the dominant form of photography until the early 21st century, when advances in digital photography drew consumers to digital formats.[35] Although modern photography is dominated by digital users, film continues to be used by enthusiasts and professional photographers. The distinctive "look" of film-based photographs compared to digital images is likely due to a combination of factors, including: (1) differences in spectral and tonal sensitivity (an S-shaped density-to-exposure (H&D) curve for film versus a linear response curve for digital CCD sensors),[36] (2) resolution and (3) continuity of tone.[37]
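The contrast between the two response curves can be sketched numerically. The logistic model below is an illustrative simplification of the S-shaped H&D curve, not an exact sensitometric formula; the parameter names (d_min, d_max, gamma) are assumptions chosen for the example.

```python
import math

def film_density(log_exposure, d_min=0.1, d_max=3.0, gamma=0.7, le_mid=0.0):
    """Illustrative S-shaped H&D curve: density vs. log10 exposure,
    modeled with a logistic function whose mid-curve slope matches gamma."""
    k = 4 * gamma / (d_max - d_min)
    return d_min + (d_max - d_min) / (1 + math.exp(-k * (log_exposure - le_mid)))

def sensor_response(exposure, full_well=1.0):
    """Idealized linear CCD response, clipping hard at saturation."""
    return min(exposure, full_well)
```

The film curve compresses tones smoothly in the shadows (the "toe") and highlights (the "shoulder"), while the linear sensor simply clips, which is one reason film highlights roll off more gently.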
Originally, all photography was monochrome, or black-and-white. Even after color film was readily available, black-and-white photography continued to dominate for decades, due to its lower cost, chemical stability, and its "classic" photographic look. The tones and contrast between light and dark areas define black-and-white photography.[38] Monochromatic pictures are not necessarily composed of pure blacks, whites, and intermediate shades of gray; depending on the process, they can instead involve shades of one particular hue. The cyanotype process, for example, produces an image composed of blue tones. The albumen print process, first used more than 170 years ago, produces brownish tones.
Many photographers continue to produce some monochrome images, sometimes because of the established archival permanence of well-processed silver-halide-based materials. Some full-color digital images are processed using a variety of techniques to create black-and-white results, and some manufacturers produce digital cameras that exclusively shoot monochrome. Monochrome printing or electronic display can be used to salvage certain photographs taken in color which are unsatisfactory in their original form; sometimes when presented as black-and-white or single-color-toned images they are found to be more effective. Although color photography has long predominated, monochrome images are still produced, mostly for artistic reasons. Almost all digital cameras have an option to shoot in monochrome, and almost all image editing software can combine or selectively discard RGB color channels to produce a monochrome image from one shot in color.
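The channel-based conversions mentioned above can be sketched per pixel. The BT.601 luma weights used here are one common convention among several; the function names are illustrative.

```python
def to_mono_luma(pixel):
    """Mix the RGB channels with ITU-R BT.601 luma weights."""
    r, g, b = pixel
    return 0.299 * r + 0.587 * g + 0.114 * b

def to_mono_single_channel(pixel, channel=1):
    """Selectively discard channels, keeping only one (default: green)."""
    return pixel[channel]
```

The weighted mix approximates perceived brightness, while keeping a single channel mimics shooting through a colored filter, which photographers use to control tonal contrast.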
Color photography was explored beginning in the 1840s. Early experiments in color required extremely long exposures (hours or days for camera images) and could not "fix" the photograph to prevent the color from quickly fading when exposed to white light.
The first permanent color photograph was taken in 1861 using the three-color-separation principle first published by Scottish physicist James Clerk Maxwell in 1855.[39][40] The foundation of virtually all practical color processes, Maxwell's idea was to take three separate black-and-white photographs through red, green and blue filters.[39][40] This provides the photographer with the three basic channels required to recreate a color image. Transparent prints of the images could be projected through similar color filters and superimposed on the projection screen, an additive method of color reproduction. A color print on paper could be produced by superimposing carbon prints of the three images made in their complementary colors, a subtractive method of color reproduction pioneered by Louis Ducos du Hauron in the late 1860s.
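Maxwell's additive method can be illustrated with a minimal sketch: three black-and-white separation records, one per filter, are reinterpreted as the channels of a single color image. The function name and the 0–255 intensity convention are assumptions for the example.

```python
def additive_combine(red_record, green_record, blue_record):
    """Recombine three black-and-white separation records (each a 2-D
    list of 0-255 intensities photographed through red, green and blue
    filters) into one color image, per Maxwell's additive principle."""
    return [
        [(r, g, b) for r, g, b in zip(row_r, row_g, row_b)]
        for row_r, row_g, row_b in zip(red_record, green_record, blue_record)
    ]
```

Projecting the three records through matching filters and superimposing them, as described above, performs exactly this recombination optically.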
Russian photographer Sergei Mikhailovich Prokudin-Gorskii made extensive use of this color separation technique, employing a special camera which successively exposed the three color-filtered images on different parts of an oblong plate. Because his exposures were not simultaneous, unsteady subjects exhibited color "fringes" or, if rapidly moving through the scene, appeared as brightly colored ghosts in the resulting projected or printed images.
Implementation of color photography was hindered by the limited sensitivity of early photographic materials, which were mostly sensitive to blue, only slightly sensitive to green, and virtually insensitive to red. The discovery of dye sensitization by photochemist Hermann Vogel in 1873 suddenly made it possible to add sensitivity to green, yellow and even red. Improved color sensitizers and ongoing improvements in the overall sensitivity of emulsions steadily reduced the once-prohibitive long exposure times required for color, bringing it ever closer to commercial viability.
Autochrome, the first commercially successful color process, was introduced by the Lumière brothers in 1907. Autochrome plates incorporated a mosaic color filter layer made of dyed grains of potato starch, which allowed the three color components to be recorded as adjacent microscopic image fragments. After an Autochrome plate was reversal processed to produce a positive transparency, the starch grains served to illuminate each fragment with the correct color and the tiny colored points blended together in the eye, synthesizing the color of the subject by the additive method. Autochrome plates were one of several varieties of additive color screen plates and films marketed between the 1890s and the 1950s.
Kodachrome, the first modern "integral tripack" (or "monopack") color film, was introduced by Kodak in 1935. It captured the three color components in a multi-layer emulsion. One layer was sensitized to record the red-dominated part of the spectrum, another layer recorded only the green part and a third recorded only the blue. Without special film processing, the result would simply be three superimposed black-and-white images, but complementary cyan, magenta, and yellow dye images were created in those layers by adding color couplers during a complex processing procedure.
Agfa's similarly structured Agfacolor Neu was introduced in 1936. Unlike Kodachrome, the color couplers in Agfacolor Neu were incorporated into the emulsion layers during manufacture, which greatly simplified the processing. Currently available color films still employ a multi-layer emulsion and the same principles, most closely resembling Agfa's product.
Instant color film, used in a special camera which yielded a unique finished color print only a minute or two after the exposure, was introduced by Polaroid in 1963.
Color photography may form images as positive transparencies, which can be used in a slide projector, or as color negatives intended for use in creating positive color enlargements on specially coated paper. The latter is now the most common form of film (non-digital) color photography owing to the introduction of automated photo printing equipment. After a transition period centered around 1995–2005, color film was relegated to a niche market by inexpensive multi-megapixel digital cameras. Film continues to be the preference of some photographers because of its distinctive "look".
In 1981, Sony unveiled the first consumer camera to use a charge-coupled device for imaging, eliminating the need for film: the Sony Mavica. The Mavica saved images to disk, but they were displayed on a television set, and the camera was not fully digital.
The first digital camera to both record and save images in a digital format was the Fujix DS-1P, created by Fujifilm in 1988.[41] https://www.fujifilm.com/innovation/achievements/ds-1p/
In 1991, Kodak unveiled the DCS 100, the first commercially available digital single-lens reflex camera. Although its high cost precluded uses other than photojournalism and professional photography, commercial digital photography was born.
Digital imaging uses an electronic image sensor to record the image as a set of electronic data rather than as chemical changes on film.[42] An important difference between digital and chemical photography is that chemical photography resists photo manipulation because it involves film and photographic paper, while digital imaging is easily manipulated. This difference allows for a degree of image post-processing that is comparatively difficult in film-based photography and permits different communicative potentials and applications.
Digital photography dominates the 21st century. More than 99% of photographs taken around the world are now taken with digital cameras, increasingly with smartphones.
Synthesis photography is the part of computer-generated imagery (CGI) in which the shooting process is modeled on real photography. CGI, which creates digital copies of real or imagined worlds, requires a process for representing those worlds visually. Synthesis photography applies analog and digital photographic techniques in digital space. Retaining the characteristics of real photography while freed from the physical limits of the real world, synthesis photography allows artists to move into areas beyond the grasp of real photography.[43]
A large variety of photographic techniques and media are used in the process of capturing images for photography. These include the camera; stereoscopy; dualphotography; full-spectrum, ultraviolet and infrared media; light field photography; and other imaging techniques.
The camera is the image-forming device, and a photographic plate, photographic film or a silicon electronic image sensor is the capture medium. The respective recording medium can be the plate or film itself, or a digital magnetic or electronic memory.[44]
Photographers control the camera and lens to "expose" the light recording material to the required amount of light to form a "latent image" (on plate or film) or RAW file (in digital cameras) which, after appropriate processing, is converted to a usable image. Digital cameras use an electronic image sensor based on light-sensitive electronics such as charge-coupled device (CCD) or complementary metal-oxide-semiconductor (CMOS) technology. The resulting digital image is stored electronically, but can be reproduced on paper.
The camera (or 'camera obscura') is a dark room or chamber from which, as far as possible, all light is excluded except the light that forms the image. It was discovered and used in the 16th century by painters. The subject being photographed, however, must be illuminated. Cameras can range from small to very large; a camera can even be a whole room that is kept dark while the object to be photographed is in another, properly illuminated, room. This was common for reproduction photography of flat copy when large film negatives were used (see Process camera).
As soon as photographic materials became "fast" (sensitive) enough for taking candid or surreptitious pictures, small "detective" cameras were made, some actually disguised as a book or handbag or pocket watch (the Ticka camera) or even worn hidden behind an Ascot necktie with a tie pin that was really the lens.
The movie camera is a type of photographic camera which takes a rapid sequence of photographs on a recording medium. In contrast to a still camera, which captures a single snapshot at a time, the movie camera takes a series of images, each called a "frame". This is accomplished through an intermittent mechanism. The frames are later played back in a movie projector at a specific speed, called the "frame rate" (number of frames per second). While viewing, a person's eyes and brain merge the separate pictures to create the illusion of motion.[45]
Photographs, both monochrome and color, can be captured and displayed through two side-by-side images that emulate human stereoscopic vision. Stereoscopic photography was the first that captured figures in motion.[46] While known colloquially as "3-D" photography, the more accurate term is stereoscopy. Such cameras have long been realized using film and, more recently, digital electronic methods (including cell phone cameras).
Dualphotography consists of photographing a scene from both sides of a photographic device at once (e.g. a single camera for back-to-back dualphotography, or two networked cameras for portal-plane dualphotography). The dualphoto apparatus can be used to simultaneously capture both the subject and the photographer, or both sides of a geographical place at once, thus adding a supplementary narrative layer to that of a single image.[47]
Ultraviolet and infrared films have been available for many decades and employed in a variety of photographic avenues since the 1960s. New technological trends in digital photography have opened a new direction in full spectrum photography, where careful filtering choices across the ultraviolet, visible and infrared lead to new artistic visions.
Modified digital cameras can detect some ultraviolet, all of the visible and much of the near infrared spectrum, as most digital imaging sensors are sensitive from about 350 nm to 1000 nm. An off-the-shelf digital camera contains an infrared hot mirror filter that blocks most of the infrared and a bit of the ultraviolet that would otherwise be detected by the sensor, narrowing the accepted range from about 400 nm to 700 nm.[48]
Replacing the hot mirror or infrared blocking filter with an infrared pass or a wide spectrally transmitting filter allows the camera to detect the wider spectrum light at greater sensitivity. Without the hot mirror, the red, green and blue (or cyan, yellow and magenta) colored micro-filters placed over the sensor elements pass varying amounts of ultraviolet (mainly through the blue filter) and infrared (primarily through the red filter, and to a lesser extent through the green and blue filters).
Full-spectrum photography is used in fine art photography, geology, forensics and law enforcement.
Digital methods of image capture and display processing have enabled the new technology of "light field photography" (also known as synthetic aperture photography). This process allows focusing at various depths of field to be selected after the photograph has been captured.[49] As explained by Michael Faraday in 1846, the "light field" is understood as 5-dimensional, with each point in 3-D space having attributes of two more angles that define the direction of each ray passing through that point.
These additional vector attributes can be captured optically through the use of microlenses at each pixel point within the 2-dimensional image sensor. Every pixel of the final image is actually a selection from each sub-array located under each microlens, as identified by a post-image capture focus algorithm.
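The microlens selection step can be sketched as a "shift-and-add" refocus: each sub-aperture view is translated in proportion to its position in the aperture and a focus parameter, then the views are averaged. This is a minimal illustrative sketch; the function and parameter names are assumptions, and integer-pixel shifts stand in for the interpolation a real implementation would use.

```python
import numpy as np

def refocus(subaperture_images, positions, alpha):
    """Shift-and-add refocusing sketch: each sub-aperture view (one per
    sampled ray direction under the microlenses) is shifted in proportion
    to its (u, v) aperture position and the focus parameter alpha, then
    all views are averaged. `subaperture_images` is a list of 2-D arrays,
    `positions` the matching list of (u, v) offsets."""
    acc = np.zeros_like(subaperture_images[0], dtype=float)
    for img, (u, v) in zip(subaperture_images, positions):
        acc += np.roll(np.roll(img, int(round(alpha * u)), axis=0),
                       int(round(alpha * v)), axis=1)
    return acc / len(subaperture_images)
```

Varying alpha after capture moves the plane of sharp focus, which is exactly the "focus later" capability described above.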
Besides the camera, other methods of forming images with light are available. For instance, a photocopy or xerography machine forms permanent images but uses the transfer of static electrical charges rather than photographic medium, hence the term electrophotography. Photograms are images produced by the shadows of objects cast on the photographic paper, without the use of a camera. Objects can also be placed directly on the glass of an image scanner to produce digital pictures.
An amateur photographer is one who practices photography as a hobby or passion and not necessarily for profit. The quality of some amateur work is comparable to that of many professionals and may be highly specialized or eclectic in choice of subjects. Amateur photography is often pre-eminent in photographic subjects which have little prospect of commercial use or reward. Amateur photography grew during the late 19th century due to the popularization of the hand-held camera.[50] Nowadays it has spread widely through social media and is carried out across different platforms and equipment, increasingly with the cell phone, a key tool that makes photography more accessible to everyone.[51]
Commercial photography is probably best defined as any photography for which the photographer is paid for images rather than works of art. In this light, money could be paid for the subject of the photograph or the photograph itself. Wholesale, retail, and professional uses of photography would all fall under this definition.
During the 20th century, both fine art photography and documentary photography became accepted by the English-speaking art world and the gallery system. In the United States, a handful of photographers, including Alfred Stieglitz, Edward Steichen, John Szarkowski, F. Holland Day, and Edward Weston, spent their lives advocating for photography as a fine art.
At first, fine art photographers tried to imitate painting styles. This movement is called Pictorialism, often using soft focus for a dreamy, 'romantic' look. In reaction to that, Weston, Ansel Adams, and others formed the Group f/64 to advocate 'straight photography', the photograph as a (sharply focused) thing in itself and not an imitation of something else.
The aesthetics of photography is a matter that continues to be discussed regularly, especially in artistic circles. Many artists argued that photography was the mechanical reproduction of an image. If photography is authentically art, then photography in the context of art would need redefinition, such as determining what component of a photograph makes it beautiful to the viewer. The controversy began with the earliest images "written with light"; Nicéphore Niépce, Louis Daguerre, and others among the very earliest photographers were met with acclaim, but some questioned if their work met the definitions and purposes of art.
Clive Bell in his classic essay Art states that only "significant form" can distinguish art from what is not art.
There must be some one quality without which a work of art cannot exist; possessing which, in the least degree, no work is altogether worthless. What is this quality? What quality is shared by all objects that provoke our aesthetic emotions? What quality is common to Sta. Sophia and the windows at Chartres, Mexican sculpture, a Persian bowl, Chinese carpets, Giotto's frescoes at Padua, and the masterpieces of Poussin, Piero della Francesca, and Cezanne? Only one answer seems possible – significant form. In each, lines and colors combined in a particular way, certain forms and relations of forms, stir our aesthetic emotions.[52]
On 7 February 2007, Sotheby's London sold the 2001 photograph 99 Cent II Diptychon for an unprecedented $3,346,456 to an anonymous bidder, making it the most expensive photograph sold at that time.[53]
Conceptual photography turns a concept or idea into a photograph. Even though the photographs depict real objects, the subject is strictly abstract.
Photojournalism is a particular form of photography (the collecting, editing, and presenting of news material for publication or broadcast) that employs images in order to tell a news story. It is now usually understood to refer only to still images, but in some cases the term also refers to video used in broadcast journalism. Photojournalism is distinguished from other close branches of photography (e.g., documentary photography, social documentary photography, street photography or celebrity photography) by complying with a rigid ethical framework which demands that the work be both honest and impartial whilst telling the story in strictly journalistic terms. Photojournalists create pictures that contribute to the news media, and help communities connect with one another. Photojournalists must be well informed and knowledgeable about events happening right outside their door. They deliver news in a creative format that is not only informative, but also entertaining.
The camera has a long and distinguished history as a means of recording scientific phenomena from its first use by Daguerre and Fox Talbot, such as astronomical events (eclipses, for example), small creatures and plants when the camera was attached to the eyepiece of a microscope (photomicroscopy) and for macro photography of larger specimens. The camera also proved useful in recording crime scenes and the scenes of accidents, such as the Wootton bridge collapse in 1861. The methods used in analysing photographs for use in legal cases are collectively known as forensic photography. Crime scene photos are taken from three vantage points: overview, mid-range, and close-up.[54]
In 1845 Francis Ronalds, the Honorary Director of the Kew Observatory, invented the first successful camera to make continuous recordings of meteorological and geomagnetic parameters. Different machines produced 12- or 24-hour photographic traces of the minute-by-minute variations of atmospheric pressure, temperature, humidity, atmospheric electricity, and the three components of geomagnetic forces. The cameras were supplied to numerous observatories around the world and some remained in use until well into the 20th century.[55][56] Charles Brooke a little later developed similar instruments for the Greenwich Observatory.[57]
Science uses imaging technology derived from the design of the pinhole camera; X-ray machines, for example, are similar in design to pinhole cameras, with high-grade filters and radiation sources.[58]
Photography has become universal in recording events and data in science and engineering, and at crime scenes or accident scenes. The method has been much extended by using other wavelengths, such as infrared photography and ultraviolet photography, as well as spectroscopy. Those methods were first used in the Victorian era and improved much further since that time.[59]
The first photograph of a single atom was captured in 2012 by physicists at Griffith University, Australia. They used an electric field to trap an ion of the element ytterbium. The image was recorded on a CCD, an electronic photographic film.[60]
Wildlife photography involves capturing images of various forms of wildlife. Unlike other forms of photography such as product or food photography, successful wildlife photography requires a photographer to choose the right place and right time when specific wildlife are present and active. It often requires great patience and considerable skill and command of the right photographic equipment.[61]
There are many ongoing questions about different aspects of photography. In her On Photography (1977), Susan Sontag dismisses the objectivity of photography. This is a highly debated subject within the photographic community.[62] Sontag argues, "To photograph is to appropriate the thing photographed. It means putting one's self into a certain relation to the world that feels like knowledge, and therefore like power."[63] Photographers decide what to take a photo of, what elements to exclude and what angle to frame the photo, and these factors may reflect a particular socio-historical context. Along these lines, it can be argued that photography is a subjective form of representation.
Modern photography has raised a number of concerns on its effect on society. In Alfred Hitchcock's Rear Window (1954), the camera is presented as promoting voyeurism. 'Although the camera is an observation station, the act of photographing is more than passive observing'.[63]
The camera doesn't rape or even possess, though it may presume, intrude, trespass, distort, exploit, and, at the farthest reach of metaphor, assassinate – all activities that, unlike the sexual push and shove, can be conducted from a distance, and with some detachment.[63]
Digital imaging has raised ethical concerns because of the ease of manipulating digital photographs in post-processing. Many photojournalists have declared they will not crop their pictures and are forbidden from combining elements of multiple photos to make "photomontages" and passing them off as "real" photographs. Today's technology has made image editing relatively simple for even the novice photographer. However, recent changes in in-camera processing allow digital fingerprinting of photos to detect tampering for purposes of forensic photography.
Photography is one of the new media forms that changes perception and changes the structure of society.[64] Further unease has been caused around cameras with regard to desensitization. Fears have been raised that disturbing or explicit images are widely accessible to children and society at large. In particular, photos of war and pornography have caused a stir. Sontag is concerned that "to photograph is to turn people into objects that can be symbolically possessed." The discussion of desensitization goes hand in hand with debates about censored images. Sontag writes of her concern that the ability to censor pictures means the photographer has the ability to construct reality.[63]
One of the practices through which photography constitutes society is tourism. Tourism and photography combine to create a "tourist gaze"[65] in which local inhabitants are positioned and defined by the camera lens. However, it has also been argued that there exists a "reverse gaze"[66] through which indigenous photographees can position the tourist photographer as a shallow consumer of images.
Additionally, photography has been the topic of many songs in popular culture.
Photography is both restricted as well as protected by the law in many jurisdictions. Protection of photographs is typically achieved through the granting of copyright or moral rights to the photographer. In the United States, photography is protected as a First Amendment right and anyone is free to photograph anything seen in public spaces as long as it is in plain view.[67] In the UK a recent law (Counter-Terrorism Act 2008) increases the power of the police to prevent people, even press photographers, from taking pictures in public places.[68] In South Africa, any person may photograph any other person, without their permission, in public spaces and the only specific restriction placed on what may not be photographed by government is related to anything classed as national security. Each country has different laws.[69]
en/4720.html.txt
Pompeii (/pɒmˈpeɪ(i)/, Latin: [pɔmˈpeːjjiː]) was an ancient city located in what is now the comune of Pompei near Naples in the Campania region of Italy. Pompeii, along with Herculaneum and many villas in the surrounding area (e.g. at Boscoreale, Stabiae), was buried under 4 to 6 m (13 to 20 ft) of volcanic ash and pumice in the eruption of Mount Vesuvius in AD 79.
Largely preserved under the ash, the excavated city offered a unique snapshot of Roman life, frozen at the moment it was buried,[1] and an extraordinarily detailed insight into the everyday life of its inhabitants, although much of the evidence was lost in the early excavations. It was a wealthy town, enjoying many fine public buildings and luxurious private houses with lavish decorations, furnishings and works of art which were the main attractions for the early excavators. Organic remains, including wooden objects and human bodies, were entombed in the ash and decayed, leaving voids which archaeologists found could be used as moulds to make plaster casts of unique and often gruesome figures in their final moments of life. The numerous graffiti carved on the walls and inside rooms provide a wealth of examples of the largely lost Vulgar Latin spoken colloquially at the time, contrasting with the formal language of the classical writers.
Pompeii is a UNESCO World Heritage Site and is one of the most popular tourist attractions in Italy, with approximately 2.5 million visitors annually.[2]
After many excavations prior to 1960 that had uncovered most of the city but left it in decay,[3] further major excavations were banned and instead they were limited to targeted, prioritised areas. In 2018, these led to new discoveries in some previously unexplored areas of the city.[4][5][6][7]
Pompeii (pronounced [pɔmˈpɛjjiː]) in Latin is a second declension plural noun (Pompeiī, -ōrum). According to Theodor Kraus, "The root of the word Pompeii would appear to be the Oscan word for the number five, pompe, which suggests that either the community consisted of five hamlets or perhaps it was settled by a family group (gens Pompeia)."[8]
Pompeii was built about 40 metres (130 ft) above sea level on a coastal lava plateau created by earlier eruptions of Mount Vesuvius, 8 km (5.0 mi) distant. The plateau fell away steeply to the south and partly to the west, into the sea. Three sheets of sediment from large landslides lie on top of the lava, perhaps triggered by extended rainfall.[9] The city bordered the coastline, though today it is 700 metres (2,300 ft) away. The mouth of the navigable Sarno River, adjacent to the city, was protected by lagoons and served early Greek and Phoenician sailors as a safe haven and port, which was developed further by the Romans.
Pompeii covered a total of 64 to 67 hectares (160 to 170 acres) and was home to 11,000 to 11,500 people, based on household counts.[10]
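The quoted area and household-based population ranges imply a rough settlement density; a minimal illustrative calculation (the ranges are from the text, the arithmetic is mine):

```python
# Rough population density of Pompeii from the quoted ranges.
area_ha = (64, 67)             # total area in hectares
population = (11_000, 11_500)  # inhabitants, based on household counts

# Bracket the density: fewest people over the largest area,
# and most people over the smallest area.
low = population[0] / area_ha[1]
high = population[1] / area_ha[0]
print(f"roughly {low:.0f} to {high:.0f} inhabitants per hectare")
```

This puts Pompeii at roughly 164 to 180 inhabitants per hectare, a dense but plausible figure for a walled Roman town.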
Although best known for its Roman remains visible today dating from AD 79, it was built upon a substantial city dating from much earlier times. Expansion of the city from an early nucleus (the old town) accelerated already from 450 BC under the Greeks after the battle of Cumae.[11]
The first stable settlements on the site date back to the 8th century BC when the Oscans,[12] a people of central Italy, founded five villages in the area.
With the arrival of the Greeks in Campania from around 740 BC, Pompeii entered the orbit of the Hellenic people and the most important building of this period is the Doric Temple, built away from the centre in what would later become the Triangular Forum.[13]:62 At the same time the cult of Apollo was introduced.[14] Greek and Phoenician sailors used the location as a safe port.
In the early 6th century BC, the settlement merged into a single community centred on the important crossroad between Cumae, Nola, and Stabiae and was surrounded by a tufa city wall (the pappamonte wall).[15][16] The first wall (which was also used as a base for the later wall) unusually enclosed a much greater area than the early town together with much agricultural land.[17] That such an impressive wall was built at this time indicates that the settlement was already important and wealthy. The city began to flourish and maritime trade started with the construction of a small port near the mouth of the river.[13] The earliest settlement was focused in regions VII and VIII of the town (the old town) as identified from stratigraphy below the Samnite and Roman buildings, as well as from the different and irregular street plan.
In 524 BC, the Etruscans arrived and settled in the area, including Pompeii, finding in the River Sarno a communication route between the sea and the interior. Like the Greeks, the Etruscans did not conquer the city militarily, but simply controlled it and Pompeii enjoyed a sort of autonomy.[13]:63 Nevertheless, Pompeii became a member of the Etruscan League of cities.[18] Excavations in 1980–1981 have shown the presence of Etruscan inscriptions and a 6th-century BC necropolis.[19] Under the Etruscans a primitive forum or simple market square was built, as well as the Temple of Apollo, in both of which objects including fragments of bucchero were found by Maiuri.[20] Several houses were built with the so-called Tuscan atrium, typical of this people.[13]:64
The city wall was strengthened in the early-5th century BC with two façades of relatively thin, vertically set, slabs of Sarno limestone some four metres apart filled with earth (the orthostate wall).[21]
In 474 BC the Greek city of Cumae, allied with Syracuse, defeated the Etruscans at the Battle of Cumae and gained control of the area.
The period between about 450 and 375 BC witnessed large areas of the city being abandoned, while important sanctuaries such as the Temple of Apollo show a sudden lack of votive material remains.[22]
The Samnites, people from the areas of Abruzzo and Molise, and allies of the Romans, conquered Greek Cumae between 423 and 420 BC and it is likely that all the surrounding territory, including Pompeii, was already conquered around 424 BC. The new rulers gradually imposed their architecture and enlarged the town.
From 343–341 BC in the Samnite Wars, the first Roman army entered the Campanian plain, bringing with it the customs and traditions of Rome, and in the Roman Latin War from 340 BC the Samnites were faithful to Rome. Pompeii, although governed by the Samnites, entered the Roman orbit, to which it remained faithful even during the Third Samnite War and in the war against Pyrrhus. In the late 4th century BC the city began to expand from its nucleus into the open walled area. The street plan of the new areas was more regular, conforming more closely to the orthogonal plan associated with Hippodamus. The city walls were reinforced in Sarno stone in the early 3rd century BC (the limestone enceinte, or "first Samnite wall"). It formed the basis for the currently visible walls, with an outer wall of rectangular limestone blocks acting as a terrace wall supporting a large agger, or earth embankment, behind it.
After the Samnite Wars from 290 BC, Pompeii was forced to accept the status of socii of Rome, maintaining, however, linguistic and administrative autonomy.
From the outbreak of the Second Punic War (218–201 BC) in which Pompeii remained faithful to Rome, an additional internal wall was built of tufa and the internal agger and outer façade raised resulting in a double parapet with wider wall-walk.[13] Despite the political uncertainty of these events and the progressive migration of wealthy men to quieter cities in the eastern Mediterranean, Pompeii continued to flourish due to the production and trade of wine and oil with places like Provence and Spain,[23] as well as to intensive agriculture on farms around the city.
In the 2nd century BC, Pompeii enriched itself by taking part in Rome's conquest of the east as shown by a statue of Apollo in the Forum erected by Lucius Mummius in gratitude for their support in the sack of Corinth and the eastern campaigns. These riches enabled Pompeii to bloom and expand to its ultimate limits. The forum and many public and private buildings of high architectural quality were built, including Teatro Grande, the Temple of Jupiter, the Basilica, the Comitium, the Stabian Baths and a new two-story portico.[24]
Pompeii was one of the towns of Campania that rebelled against Rome in the Social War, and in 89 BC it was besieged by Sulla, who targeted the strategically vulnerable Porta Ercolano with his artillery, as can still be seen from the impact craters of thousands of ballista shots in the walls. Many nearby buildings inside the walls were also destroyed.[25] Although the battle-hardened troops of the Social League, headed by Lucius Cluentius, helped in resisting the Romans, Pompeii was forced to surrender after the conquest of Nola.
The result was that Pompeii became a Roman colony with the name of Colonia Cornelia Veneria Pompeianorum. Many of Sulla's veterans were given land and property in and around the city, while many of those who opposed Rome were dispossessed of their property. Despite this, the Pompeians were granted Roman citizenship and they were quickly assimilated into the Roman world. The main language in the city became Latin,[26] and many of Pompeii's old aristocratic families Latinized their names as a sign of assimilation.[27]
The city became an important passage for goods that arrived by sea and had to be sent toward Rome or Southern Italy along the nearby Appian Way. Many public buildings were built or refurbished and improved under the new order; new buildings included the Amphitheatre of Pompeii in 70 BC, the Forum Baths, and the Odeon, while the forum was embellished with the colonnade of Popidius before 80 BC.[28] These buildings raised the status of Pompeii as a cultural centre in the region as it outshone its neighbours in the number of places for entertainment which significantly enhanced the social and economic development of the city.
Under Augustus, from about 30 BC a major expansion in new public buildings, as in the rest of the empire, included the Eumachia Building, the Sanctuary of Augustus and the Macellum. From about 20 BC, Pompeii was fed with running water by a spur from the Serino Aqueduct, built by Marcus Vipsanius Agrippa.
In AD 59, there was a serious riot and bloodshed in the amphitheatre between Pompeians and Nucerians (which is recorded in a fresco) and which led the Roman senate to send the Praetorian Guard to restore order and to ban further events for a period of ten years.[29][30]
The inhabitants of Pompeii had long been used to minor earthquakes (indeed, the writer Pliny the Younger wrote that earth tremors "were not particularly alarming because they are frequent in Campania"), but on 5 February 62[31] a severe earthquake did considerable damage around the bay, and particularly to Pompeii. It is believed that the earthquake would have registered between about 5 and 6 on the Richter magnitude scale.[32]
On that day in Pompeii, there were to be two sacrifices, as it was the anniversary of Augustus being named "Father of the Nation" and also a feast day to honour the guardian spirits of the city. Chaos followed the earthquake; fires caused by oil lamps that had fallen during the quake added to the panic. The nearby cities of Herculaneum and Nuceria were also affected.[32]
Between 62 and the eruption in 79 most rebuilding was done in the private sector and older, damaged frescoes were often covered with newer ones, for example. In the public sector the opportunity was taken to improve buildings and the city plan e.g. in the forum.[33]
An important field of current research concerns structures that were restored between the earthquake of 62 and the eruption. It was thought until recently that some of the damage had still not been repaired at the time of the eruption but this has been shown to be doubtful as the evidence of missing forum statues and marble wall-veneers are most likely due to robbers after the city's burial.[34][35] The public buildings on the east side of the forum were largely restored and were even enhanced by beautiful marble veneers and other modifications to the architecture.[36]
Some buildings like the Central Baths were only started after the earthquake and were built to enhance the city with modern developments in their architecture, as had been done in Rome, in terms of wall-heating and window glass, and with well-lit spacious rooms. The new baths took over a whole insula by demolishing houses, which may have been made easier by the earthquake that had damaged these houses. This shows that the city was still flourishing rather than struggling to recover from the earthquake.[37]
In about 64, Nero and his wife Poppaea visited Pompeii and made gifts to the temple of Venus, probably when he performed in the theatre of Naples.[38]
By 79, Pompeii had a population of 20,000,[39] which had prospered from the region's renowned agricultural fertility and favourable location.
The eruption lasted for two days.[40] The first phase was of pumice rain (lapilli) lasting about 18 hours, allowing most inhabitants to escape. That only approximately 1,150 bodies[41] have so far been found on site seems to confirm this theory and most escapees probably managed to salvage some of their most valuable belongings; many skeletons were found with jewellery, coins and silverware.
At some time in the night or early the next day, pyroclastic flows began near the volcano, consisting of high-speed, dense, and very hot ash clouds, which knocked down, wholly or partly, all structures in their path, incinerated or suffocated the remaining population and altered the landscape, including the coastline. By the evening of the second day, the eruption was over, leaving only a haze in the atmosphere through which the sun shone weakly.
A multidisciplinary volcanological and bio-anthropological study[42] of the eruption products and victims, merged with numerical simulations and experiments, indicates that at Pompeii and surrounding towns heat was the main cause of death of people, previously believed to have died by ash suffocation. The results of the study, published in 2010, show that exposure to at least 250 °C (480 °F) hot pyroclastic flows at a distance of 10 kilometres (6 miles) from the vent was sufficient to cause instant death, even if people were sheltered within buildings. The people and buildings of Pompeii were covered in up to twelve different layers of tephra, in total up to 6 metres (19.7 ft) deep.
Pliny the Younger provided a first-hand account of the eruption of Mount Vesuvius from his position across the Bay of Naples at Misenum but written 25 years after the event.[43] His uncle, Pliny the Elder, with whom he had a close relationship, died while attempting to rescue stranded victims. As admiral of the fleet, Pliny the Elder had ordered the ships of the Imperial Navy stationed at Misenum to cross the bay to assist evacuation attempts. Volcanologists have recognised the importance of Pliny the Younger's account of the eruption by calling similar events "Plinian". It had long been thought that the eruption was an August event based on one version of the letter but another version[44] gives a date of the eruption as late as 23 November. A later date is consistent with a charcoal inscription at the site, discovered in 2018, which includes the date of 17 October and which must have been recently written.[45]
Clear support for an October/November eruption is found in the fact that people buried in the ash appear to have been wearing heavier clothing than the light summer clothes typical of August. The fresh fruit and vegetables in the shops are typical of October – and conversely the summer fruit typical of August was already being sold in dried, or conserved form. Nuts from chestnut trees were found at Oplontis which would not have been mature before mid-September.[46] Wine fermenting jars had been sealed, which would have happened around the end of October. Coins found in the purse of a woman buried in the ash include one with a 15th imperatorial acclamation among the emperor's titles. These coins could not have been minted before the second week of September.[44]
Titus appointed two ex-consuls to organise a relief effort, while donating large amounts of money from the imperial treasury to aid the victims of the volcano.[47] He visited Pompeii once after the eruption and again the following year[48] but no work was done on recovery.
Soon after the burial of the city, survivors and possibly thieves came to salvage valuables, including the marble statues from the forum and other precious materials from buildings. There is wide evidence of post-eruption disturbance, including holes made through walls. The city was not completely buried, and tops of larger buildings would have been above the ash making it obvious where to dig or salvage building material.[49] The robbers left traces of their passage, as in a house where modern archaeologists found a wall graffito saying "house dug".[50]
Over the following centuries, its name and location were forgotten, though it still appeared on the Tabula Peutingeriana of the 4th century. Further eruptions particularly in 471–473 and 512 covered the remains more deeply. The area became known as the La Civita (the city) due to the features in the ground.[51]
The next known date that any part was unearthed was 1592, when the architect Domenico Fontana, while digging an underground aqueduct to the mills of Torre Annunziata, ran into ancient walls covered with paintings and inscriptions. His aqueduct passed through and under a large part of the city[52] and would have had to pass through many buildings and foundations, as can still be seen in many places today, but he kept quiet and nothing more came of the discovery.
In 1689, Francesco Picchetti saw a wall inscription mentioning decurio Pompeiis ("town councillor of Pompeii"), but he associated it with a villa of Pompey. Francesco Bianchini pointed out its true meaning, and he was supported by Giuseppe Macrini, who in 1693 excavated some walls and wrote that Pompeii lay beneath La Civita.[53]
Herculaneum itself was rediscovered in 1738 by workmen digging for the foundations of a summer palace for the King of Naples, Charles of Bourbon. Due to the spectacular quality of the finds, the Spanish military engineer Rocque Joaquin de Alcubierre made excavations to find further remains at the site of Pompeii in 1748, although the city was not yet identified.[54] Charles of Bourbon took great interest in the finds, even after leaving to become King of Spain, because the display of antiquities reinforced the political and cultural prestige of Naples.[55] On 20 August 1763, an inscription [...] Rei Publicae Pompeianorum [...] was found and the city was identified as Pompeii.[56]
Karl Weber directed the first scientific excavations.[57] He was followed in 1764 by the military engineer Francisco la Vega, who was succeeded by his brother, Pietro, in 1804.[58]
There was much progress in exploration when the French occupied Naples in 1799 and ruled over Italy from 1806 to 1815. The land on which Pompeii lies was expropriated and up to 700 workers were employed in the excavations. The excavated areas in the north and south were connected. Parts of the Via dell'Abbondanza were also exposed in a west–east direction, and for the first time an impression of the size and appearance of the ancient town could be gained. In the following years the excavators struggled with a lack of money and excavations progressed slowly, but with significant finds such as the houses of the Faun, of Menander, of the Tragic Poet and of the Surgeon.
Giuseppe Fiorelli took charge of the excavations in 1863 and made greater progress.[59] During early excavations of the site, occasional voids in the ash layer had been found that contained human remains. It was Fiorelli who realised these were spaces left by the decomposed bodies and so devised the technique of injecting plaster into them to recreate the forms of Vesuvius's victims. This technique is still in use today, with a clear resin now used instead of plaster because it is more durable, and does not destroy the bones, allowing further analysis.[60]
Fiorelli also introduced scientific documentation. He divided the city into the present nine areas (regiones) and blocks (insulae) and numbered the entrances of the individual houses (domus), so that each is identified by these three numbers. Fiorelli also published the first periodical with excavation reports. Under Fiorelli's successors the entire west of the city was exposed.
In the 1920s, Amedeo Maiuri excavated for the first time in layers older than that of AD 79 in order to learn about the settlement history. Maiuri made the last excavations on a grand scale in the 1950s, when the area south of the Via dell'Abbondanza and the city wall was almost completely uncovered, but the work was poorly documented scientifically. Preservation was haphazard and presents today's archaeologists with great difficulty. Questionable reconstruction was done in the 1980s and 1990s after the severe earthquake of 1980, which caused great destruction. Since then, except for targeted soundings and excavations, work has been confined to the excavated areas. Further excavations on a large scale are not planned, and today archaeologists try to reconstruct, to document and, above all, to stop the ever faster decay.
Under the 'Great Pompeii Project' over 2.5 km of ancient walls are being relieved of danger of collapse by treating the unexcavated areas behind the street fronts in order to increase drainage and reduce the pressure of ground water and earth on the walls, a problem especially in the rainy season. As of August 2019, these excavations have resumed on unexcavated areas of Regio V.[61]
Objects buried beneath Pompeii were well-preserved for almost 2,000 years as the lack of air and moisture allowed little to no deterioration. However, once exposed, Pompeii has been subject to both natural and man-made forces, which have rapidly increased deterioration.
Weathering, erosion, light exposure, water damage, poor methods of excavation and reconstruction, introduced plants and animals, tourism, vandalism and theft have all damaged the site in some way. The lack of adequate weather protection of all but the most interesting and important buildings has allowed original interior decoration to fade or be lost. Two-thirds of the city has been excavated, but the remnants of the city are rapidly deteriorating.[62]
Furthermore, during World War II many buildings were badly damaged or destroyed by bombs dropped in several raids by the Allied forces.[63]
The concern for conservation has continually troubled archaeologists. The ancient city was included in the 1996 World Monuments Watch by the World Monuments Fund, and again in 1998 and in 2000. In 1996 the organisation claimed that Pompeii "desperately need[ed] repair" and called for the drafting of a general plan of restoration and interpretation.[64] The organisation supported conservation at Pompeii with funding from American Express and the Samuel H. Kress Foundation.[65]
Today, funding is mostly directed into conservation of the site; however, due to the expanse of Pompeii and the scale of the problems, this is inadequate in halting the slow decay of the materials. A 2012 study recommended an improved strategy for interpretation and presentation of the site as a cost-effective method of improving its conservation and preservation in the short term.[66]
In June 2013, UNESCO declared that if restoration and preservation works "fail to deliver substantial progress in the next two years", Pompeii could be placed on the List of World Heritage in Danger.[67]
The 2,000-year-old Schola Armatorum ('House of the Gladiators') collapsed on 6 November 2010. The structure was not open to visitors, but the outside was visible to tourists. There was no immediate determination of what caused the building to collapse, although reports suggested water infiltration following heavy rains might have been responsible. The collapse provoked fierce controversy, with accusations of neglect.[68][69]
Under the Romans, after the conquest by Sulla in 89 BC, Pompeii underwent a process of urban development which accelerated in the Augustan period from about 30 BC. New public buildings included the amphitheatre, a palaestra (gymnasium) with a central natatorium (cella natatoria, or swimming pool), two theatres, the Eumachia Building and at least four public baths. The amphitheatre has been cited by scholars as a model of sophisticated design, particularly in the area of crowd control.[70]
Other service buildings were the Macellum ("meat market"); the Pistrinum ("mill"); the Thermopolium (a fast food place that served hot and cold dishes and beverages), and cauponae ("cafes" or "dives" with a seedy reputation as hangouts for thieves and prostitutes). At least one building, the Lupanar, was dedicated to prostitution.[71] A large hotel or hospitium (of 1,000 square metres) was found at Murecine, a short distance from Pompeii, when the Naples-Salerno motorway was being built, and the Murecine Silver Treasure and the Tablets providing a unique record of business transactions were discovered.[72][73]
An aqueduct provided water to the public baths, to more than 25 street fountains, and to many private houses (domūs) and businesses. The aqueduct was a branch of the great Serino Aqueduct built to serve the other large towns in the Bay of Naples region and the important naval base at Misenum. The castellum aquae is well preserved, and includes many details of the distribution network and its controls.[74]
Modern archaeologists have excavated garden sites and urban domains to reveal the agricultural staples of Pompeii's economy. Pompeii was fortunate to have had fertile soil for crop cultivation. The soils surrounding Mount Vesuvius preceding its eruption have been revealed to have had good water-retention capabilities, implying productive agriculture. The Tyrrhenian Sea's airflow provided hydration to the soil despite the hot, dry climate.[75] Barley, wheat, and millet were all produced along with wine and olive oil, in abundance for export to other regions.[76]
Evidence of wine exported nationally from Pompeii in its most prosperous years can be found in recovered artefacts such as wine bottles in Rome.[76] For this reason, vineyards were of utmost importance to Pompeii's economy. The agricultural writer Columella suggested that each vineyard in Rome should produce a quota of three cullei of wine per jugerum, or else the vineyard would be uprooted. The nutrient-rich lands near Pompeii were extremely productive and were often able to exceed these requirements by a steep margin, providing the incentive for local wineries to establish themselves.[76] While wine was exported for Pompeii's economy, the majority of the other agricultural goods were likely produced in quantities sufficient for the city's consumption.
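Columella's quota of three cullei per jugerum can be put in modern terms with a short sketch. The unit conversions here (1 culleus of roughly 520 litres, 1 jugerum of roughly 0.25 hectare) are approximate modern estimates, not figures from the text:

```python
# Approximate modern equivalent of Columella's minimum yield of
# three cullei of wine per jugerum. Conversions are rough estimates,
# not values given in the article.
LITRES_PER_CULLEUS = 520.0    # 1 culleus = 20 amphorae, about 520 L
HECTARES_PER_JUGERUM = 0.25   # 1 jugerum is about a quarter hectare

quota = 3 * LITRES_PER_CULLEUS / HECTARES_PER_JUGERUM  # litres per hectare
print(f"minimum yield: about {quota:.0f} litres of wine per hectare")
```

Under these assumptions the quota works out to roughly 6,200 litres of wine per hectare, which gives a sense of the productivity the fields around Pompeii are said to have comfortably exceeded.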
Remains of large formations of constructed wineries were found in the Forum Boarium, covered by cemented casts from the eruption of Vesuvius.[76] It is speculated that these historical vineyards are strikingly similar in structure to the modern day vineyards across Italy.
Carbonised food plant remains, roots, seeds and pollens, have been found from gardens in Pompeii, Herculaneum, and from the Roman villa at Torre Annunziata. They revealed that emmer wheat, Italian millet, common millet, walnuts, pine nuts, chestnuts, hazel nuts, chickpeas, bitter vetch, broad beans, olives, figs, pears, onions, garlic, peaches, carob, grapes, and dates were consumed. All but the dates could have been produced locally.[77]
Town houses:
Exterior villas:
Other:
The discovery of erotic art in Pompeii and Herculaneum left the archaeologists with a dilemma stemming from the clash of cultures between the mores of sexuality in ancient Rome and in Counter-Reformation Europe. An unknown number of discoveries were hidden away again. A wall fresco depicting Priapus, the ancient god of sex and fertility, with his grotesquely enlarged penis, was covered with plaster. An older reproduction was locked away "out of prudishness" and opened only on request – and only rediscovered in 1998 due to rainfall.[78] In 2018, an ancient fresco depicting an erotic scene of "Leda and the Swan" was discovered at Pompeii.[79]
Many artefacts from the buried cities are preserved in the Naples National Archaeological Museum. In 1819, when King Francis visited the Pompeii exhibition there with his wife and daughter, he was so embarrassed by the erotic artwork that he had it locked away in a "secret cabinet" (gabinetto segreto), a gallery within the museum accessible only to "people of mature age and respected morals". Re-opened, closed, re-opened again and then closed again for nearly 100 years, the Naples "Secret Museum" was briefly made accessible again at the end of the 1960s (the time of the sexual revolution) and was finally re-opened for viewing in 2000. Minors are still allowed entry only in the presence of a guardian or with written permission.[80]
Pompeii has been a popular tourist destination for over 250 years;[81] it was on the Grand Tour. By 2008, it was attracting almost 2.6 million visitors per year, making it one of the most popular tourist sites in Italy.[82] It is part of a larger Vesuvius National Park and was declared a World Heritage Site by UNESCO in 1997. To combat problems associated with tourism, the governing body for Pompeii, the 'Soprintendenza Archeologica di Pompei', has begun issuing new tickets that allow tourists to visit other sites such as Herculaneum and Stabiae as well as the Villa Poppaea, to encourage visitors to see these sites and reduce pressure on Pompeii.
Pompeii is a driving force behind the economy of the nearby town of Pompei. Many residents are employed in the tourism and hospitality industry, serving as taxi or bus drivers, waiters, or hotel staff.[citation needed]
Excavations at the site have generally ceased due to a moratorium imposed by the superintendent of the site, Professor Pietro Giovanni Guzzo. The site is generally less accessible to tourists than in the past, with less than a third of all buildings open in the 1960s being available for public viewing today.
The 1954 film, Journey to Italy, starring George Sanders and Ingrid Bergman, includes a scene at Pompeii in which they witness the excavation of a cast of a couple that perished in the eruption.
Pompeii was the setting for the British comedy television series Up Pompeii! and the movie of the series. Pompeii also featured in the second episode of the fourth season of revived BBC science fiction series Doctor Who, named "The Fires of Pompeii",[83] which featured Caecilius as a character.
In 1971, the rock band Pink Floyd filmed a live concert titled Pink Floyd: Live at Pompeii, in which they performed six songs in the ancient Roman amphitheatre in the city. The audience consisted only of the film's production crew and some local children.
Siouxsie and the Banshees wrote and recorded the punk-inflected dance song "Cities in Dust", which describes the disaster that befell Pompeii and Herculaneum in AD 79. The song appears on their album Tinderbox, released in 1985, on Polydor Records. The jacket of the single remix of the song features the plaster cast of the chained dog killed in Pompeii.
Pompeii is a novel written by Robert Harris (published in 2003) featuring the account of the aquarius's race to fix the broken aqueduct in the days leading up to the eruption of Vesuvius, inspired by actual events and people.
"Pompeii" is a song by the British band Bastille, released 24 February 2013. The lyrics refer to the city and the eruption of Mount Vesuvius.
Pompeii is a 2014 German-Canadian historical disaster film produced and directed by Paul W. S. Anderson.[84]
In 2016, 45 years after the Pink Floyd recordings, band guitarist David Gilmour returned to the Pompeii amphitheatre to perform a live concert for his Rattle That Lock Tour. This event was considered the first in the amphitheatre to feature an audience since the AD 79 eruption of Vesuvius.[85][86]
The Basilica
Fresco from the Villa dei Misteri
The Forum
The Temple of Apollo
The House of the Faun
The Forum
en/4721.html.txt
Pompeii (/pɒmˈpeɪ(i)/, Latin: [pɔmˈpeːjjiː]) was an ancient city located in what is now the comune of Pompei near Naples in the Campania region of Italy. Pompeii, along with Herculaneum and many villas in the surrounding area (e.g. at Boscoreale, Stabiae), was buried under 4 to 6 m (13 to 20 ft) of volcanic ash and pumice in the eruption of Mount Vesuvius in AD 79.
Largely preserved under the ash, the excavated city offered a unique snapshot of Roman life, frozen at the moment it was buried,[1] and an extraordinarily detailed insight into the everyday life of its inhabitants, although much of the evidence was lost in the early excavations. It was a wealthy town, enjoying many fine public buildings and luxurious private houses with lavish decorations, furnishings and works of art, which were the main attractions for the early excavators. Organic remains, including wooden objects and human bodies, were entombed in the ash and decayed, leaving voids which archaeologists found could be used as moulds to make plaster casts of unique and often gruesome figures in their final moments of life. The numerous graffiti carved on the walls and inside rooms provide a wealth of examples of the largely lost Vulgar Latin spoken colloquially at the time, contrasting with the formal language of the classical writers.
Pompeii is a UNESCO World Heritage Site and is one of the most popular tourist attractions in Italy, with approximately 2.5 million visitors annually.[2]
After extensive excavations prior to 1960 had uncovered most of the city but left it in decay,[3] further major excavations were banned, and work was instead limited to targeted, prioritised areas. In 2018, these led to new discoveries in some previously unexplored areas of the city.[4][5][6][7]
In Latin, Pompeii (pronounced [pɔmˈpɛjjiː]) is a second-declension plural noun (Pompeiī, -ōrum). According to Theodor Kraus, "The root of the word Pompeii would appear to be the Oscan word for the number five, pompe, which suggests that either the community consisted of five hamlets or perhaps it was settled by a family group (gens Pompeia)."[8]
Pompeii was built about 40 metres (130 ft) above sea level on a coastal lava plateau created by earlier eruptions of Mount Vesuvius, 8 km (5.0 mi) distant. The plateau fell steeply to the south, and partly to the west, into the sea. Three sheets of sediment from large landslides lie on top of the lava, perhaps triggered by extended rainfall.[9] The city bordered the coastline, though today it is 700 metres (2,300 ft) away. The mouth of the navigable Sarno River, adjacent to the city, was protected by lagoons and served early Greek and Phoenician sailors as a safe haven and port, which was developed further by the Romans.
Pompeii covered a total of 64 to 67 hectares (160 to 170 acres) and was home to 11,000 to 11,500 people, based on household counts.[10]
Although best known for its Roman remains, visible today and dating from AD 79, it was built upon a substantial city of much earlier date. Expansion of the city from its early nucleus (the old town) accelerated from as early as 450 BC, under the Greeks, after the Battle of Cumae.[11]
The first stable settlements on the site date back to the 8th century BC when the Oscans,[12] a people of central Italy, founded five villages in the area.
With the arrival of the Greeks in Campania from around 740 BC, Pompeii entered the orbit of the Hellenic people. The most important building of this period is the Doric Temple, built away from the centre in what would later become the Triangular Forum.[13]:62 At the same time, the cult of Apollo was introduced.[14] Greek and Phoenician sailors used the location as a safe port.
In the early 6th century BC, the settlement merged into a single community centred on the important crossroad between Cumae, Nola, and Stabiae and was surrounded by a tufa city wall (the pappamonte wall).[15][16] The first wall (which was also used as a base for the later wall) unusually enclosed a much greater area than the early town together with much agricultural land.[17] That such an impressive wall was built at this time indicates that the settlement was already important and wealthy. The city began to flourish and maritime trade started with the construction of a small port near the mouth of the river.[13] The earliest settlement was focused in regions VII and VIII of the town (the old town) as identified from stratigraphy below the Samnite and Roman buildings, as well as from the different and irregular street plan.
In 524 BC, the Etruscans arrived and settled in the area, including Pompeii, finding in the River Sarno a communication route between the sea and the interior. Like the Greeks, the Etruscans did not conquer the city militarily, but simply controlled it and Pompeii enjoyed a sort of autonomy.[13]:63 Nevertheless, Pompeii became a member of the Etruscan League of cities.[18] Excavations in 1980–1981 have shown the presence of Etruscan inscriptions and a 6th-century BC necropolis.[19] Under the Etruscans a primitive forum or simple market square was built, as well as the Temple of Apollo, in both of which objects including fragments of bucchero were found by Maiuri.[20] Several houses were built with the so-called Tuscan atrium, typical of this people.[13]:64
The city wall was strengthened in the early 5th century BC with two façades of relatively thin, vertically set slabs of Sarno limestone, set some four metres apart and filled with earth (the orthostate wall).[21]
In 474 BC the Greek city of Cumae, allied with Syracuse, defeated the Etruscans at the Battle of Cumae and gained control of the area.
The period between about 450 and 375 BC witnessed large areas of the city being abandoned, while important sanctuaries such as the Temple of Apollo show a sudden lack of votive material remains.[22]
The Samnites, people from the areas of Abruzzo and Molise, and allies of the Romans, conquered Greek Cumae between 423 and 420 BC and it is likely that all the surrounding territory, including Pompeii, was already conquered around 424 BC. The new rulers gradually imposed their architecture and enlarged the town.
During the Samnite Wars of 343–341 BC, the first Roman army entered the Campanian plain, bringing with it the customs and traditions of Rome, and in the Latin War from 340 BC the Samnites were faithful to Rome. Pompeii, although governed by the Samnites, entered the Roman orbit, to which it remained faithful even during the Third Samnite War and in the war against Pyrrhus. In the late 4th century BC the city began to expand from its nucleus into the open walled area. The street plan of the new areas was more regular, conforming more closely to Hippodamus's grid plan. The city walls were reinforced in Sarno stone in the early 3rd century BC (the limestone enceinte, or "first Samnite wall"), forming the basis for the currently visible walls, with an outer wall of rectangular limestone blocks acting as a terrace wall supporting a large agger, or earth embankment, behind it.
After the Samnite Wars from 290 BC, Pompeii was forced to accept the status of socii of Rome, maintaining, however, linguistic and administrative autonomy.
From the outbreak of the Second Punic War (218–201 BC), during which Pompeii remained faithful to Rome, an additional internal wall of tufa was built, and the internal agger and outer façade were raised, resulting in a double parapet with a wider wall-walk.[13] Despite the political uncertainty of these events and the progressive migration of wealthy men to quieter cities in the eastern Mediterranean, Pompeii continued to flourish due to the production and trade of wine and oil with places like Provence and Spain,[23] as well as to intensive agriculture on farms around the city.
In the 2nd century BC, Pompeii enriched itself by taking part in Rome's conquest of the east as shown by a statue of Apollo in the Forum erected by Lucius Mummius in gratitude for their support in the sack of Corinth and the eastern campaigns. These riches enabled Pompeii to bloom and expand to its ultimate limits. The forum and many public and private buildings of high architectural quality were built, including Teatro Grande, the Temple of Jupiter, the Basilica, the Comitium, the Stabian Baths and a new two-story portico.[24]
Pompeii was one of the towns of Campania that rebelled against Rome in the Social Wars and in 89 BC it was besieged by Sulla, who targeted the strategically vulnerable Porta Ercolano with his artillery as can still be seen by the impact craters of thousands of ballista shots in the walls. Many nearby buildings inside the walls were also destroyed.[25] Although the battle-hardened troops of the Social League, headed by Lucius Cluentius, helped in resisting the Romans, Pompeii was forced to surrender after the conquest of Nola.
The result was that Pompeii became a Roman colony with the name of Colonia Cornelia Veneria Pompeianorum. Many of Sulla's veterans were given land and property in and around the city, while many of those who opposed Rome were dispossessed of their property. Despite this, the Pompeians were granted Roman citizenship and they were quickly assimilated into the Roman world. The main language in the city became Latin,[26] and many of Pompeii's old aristocratic families Latinized their names as a sign of assimilation.[27]
The city became an important passage for goods that arrived by sea and had to be sent toward Rome or Southern Italy along the nearby Appian Way. Many public buildings were built or refurbished and improved under the new order; new buildings included the Amphitheatre of Pompeii in 70 BC, the Forum Baths, and the Odeon, while the forum was embellished with the colonnade of Popidius before 80 BC.[28] These buildings raised the status of Pompeii as a cultural centre in the region as it outshone its neighbours in the number of places for entertainment which significantly enhanced the social and economic development of the city.
Under Augustus, from about 30 BC, a major expansion in new public buildings, as in the rest of the empire, included the Eumachia Building, the Sanctuary of Augustus and the Macellum. From about 20 BC, Pompeii was fed with running water by a spur from the Serino Aqueduct, built by Marcus Vipsanius Agrippa.
In AD 59, a serious riot with bloodshed broke out in the amphitheatre between Pompeians and Nucerians (recorded in a fresco), which led the Roman senate to send the Praetorian Guard to restore order and to ban further events for a period of ten years.[29][30]
The inhabitants of Pompeii had long been used to minor earthquakes (indeed, the writer Pliny the Younger wrote that earth tremors "were not particularly alarming because they are frequent in Campania"), but on 5 February 62[31] a severe earthquake did considerable damage around the bay, and particularly to Pompeii. It is believed that the earthquake would have registered between about 5 and 6 on the Richter magnitude scale.[32]
On that day in Pompeii, there were to be two sacrifices, as it was the anniversary of Augustus being named "Father of the Nation" and also a feast day to honour the guardian spirits of the city. Chaos followed the earthquake; fires caused by oil lamps that had fallen during the quake added to the panic. The nearby cities of Herculaneum and Nuceria were also affected.[32]
Between 62 and the eruption in 79, most rebuilding was done in the private sector; older, damaged frescoes, for example, were often covered with newer ones. In the public sector the opportunity was taken to improve buildings and the city plan, e.g. in the forum.[33]
An important field of current research concerns structures that were restored between the earthquake of 62 and the eruption. It was thought until recently that some of the damage had still not been repaired at the time of the eruption, but this has been shown to be doubtful, as the missing forum statues and marble wall-veneers are most likely the work of robbers after the city's burial.[34][35] The public buildings on the east side of the forum were largely restored and were even enhanced by beautiful marble veneers and other modifications to the architecture.[36]
Some buildings like the Central Baths were only started after the earthquake and were built to enhance the city with modern developments in their architecture, as had been done in Rome, in terms of wall-heating and window glass, and with well-lit spacious rooms. The new baths took over a whole insula by demolishing houses, which may have been made easier by the earthquake that had damaged these houses. This shows that the city was still flourishing rather than struggling to recover from the earthquake.[37]
In about 64, Nero and his wife Poppaea visited Pompeii and made gifts to the temple of Venus, probably when he performed in the theatre of Naples.[38]
By 79, Pompeii had a population of 20,000,[39] which had prospered from the region's renowned agricultural fertility and favourable location.
The eruption lasted for two days.[40] The first phase was a rain of pumice (lapilli) lasting about 18 hours, allowing most inhabitants to escape. That only approximately 1,150 bodies[41] have so far been found on site seems to confirm that most inhabitants fled, and most escapees probably managed to salvage some of their most valuable belongings; many skeletons were found with jewellery, coins and silverware.
At some time in the night or early the next day, pyroclastic flows began near the volcano, consisting of high-speed, dense, very hot ash clouds that wholly or partly knocked down all structures in their path, incinerated or suffocated the remaining population, and altered the landscape, including the coastline. By the evening of the second day the eruption was over, leaving only haze in the atmosphere through which the sun shone weakly.
A multidisciplinary volcanological and bio-anthropological study[42] of the eruption products and victims, merged with numerical simulations and experiments, indicates that at Pompeii and surrounding towns heat was the main cause of death, rather than ash suffocation as previously believed. The results of the study, published in 2010, show that exposure to pyroclastic flows of at least 250 °C (480 °F), even at a distance of 10 kilometres (6 miles) from the vent, was sufficient to cause instant death, even for people sheltered within buildings. The people and buildings of Pompeii were covered in up to twelve different layers of tephra, in total up to 6 metres (19.7 ft) deep.
Pliny the Younger provided a first-hand account of the eruption of Mount Vesuvius from his position across the Bay of Naples at Misenum, although it was written 25 years after the event.[43] His uncle, Pliny the Elder, with whom he had a close relationship, died while attempting to rescue stranded victims. As admiral of the fleet, Pliny the Elder had ordered the ships of the Imperial Navy stationed at Misenum to cross the bay to assist evacuation attempts. Volcanologists have recognised the importance of Pliny the Younger's account of the eruption by calling similar events "Plinian". It had long been thought that the eruption was an August event based on one version of the letter, but another version[44] gives a date of the eruption as late as 23 November. A later date is consistent with a charcoal inscription at the site, discovered in 2018, which includes the date of 17 October and which must have been recently written.[45]
Clear support for an October/November eruption is found in the fact that people buried in the ash appear to have been wearing heavier clothing than the light summer clothes typical of August. The fresh fruit and vegetables in the shops were typical of October; conversely, the summer fruit typical of August was already being sold in dried or conserved form. Nuts from chestnut trees found at Oplontis would not have been mature before mid-September.[46] Wine-fermenting jars had been sealed, which would have happened around the end of October. Coins found in the purse of a woman buried in the ash include one with a 15th imperatorial acclamation among the emperor's titles; these could not have been minted before the second week of September.[44]
Titus appointed two ex-consuls to organise a relief effort, while donating large amounts of money from the imperial treasury to aid the victims of the volcano.[47] He visited Pompeii once after the eruption and again the following year[48] but no work was done on recovery.
Soon after the burial of the city, survivors and possibly thieves came to salvage valuables, including the marble statues from the forum and other precious materials from buildings. There is wide evidence of post-eruption disturbance, including holes made through walls. The city was not completely buried, and tops of larger buildings would have been above the ash making it obvious where to dig or salvage building material.[49] The robbers left traces of their passage, as in a house where modern archaeologists found a wall graffito saying "house dug".[50]
Over the following centuries, its name and location were forgotten, though it still appeared on the Tabula Peutingeriana of the 4th century. Further eruptions, particularly in 471–473 and 512, covered the remains more deeply. The area became known as La Civita ("the city") because of the features in the ground.[51]
The next known date that any part was unearthed was 1592, when the architect Domenico Fontana, while digging an underground aqueduct to the mills of Torre Annunziata, ran into ancient walls covered with paintings and inscriptions. His aqueduct passed through and under a large part of the city[52] and would have had to pass through many buildings and foundations, as can still be seen in many places today, but he kept quiet and nothing more came of the discovery.
In 1689, Francesco Picchetti saw a wall inscription mentioning decurio Pompeiis ("town councillor of Pompeii"), but he associated it with a villa of Pompey. Francesco Bianchini pointed out its true meaning, and he was supported by Giuseppe Macrini, who in 1693 excavated some walls and wrote that Pompeii lay beneath La Civita.[53]
Herculaneum itself was rediscovered in 1738 by workmen digging for the foundations of a summer palace for the King of Naples, Charles of Bourbon. Due to the spectacular quality of the finds, the Spanish military engineer Rocque Joaquin de Alcubierre made excavations to find further remains at the site of Pompeii in 1748, even though the city had not yet been identified.[54] Charles of Bourbon took great interest in the finds, even after leaving to become king of Spain, because the display of antiquities reinforced the political and cultural prestige of Naples.[55] On 20 August 1763, an inscription [...] Rei Publicae Pompeianorum [...] was found and the city was identified as Pompeii.[56]
Karl Weber directed the first scientific excavations.[57] He was followed in 1764 by the military engineer Francisco la Vega, who was succeeded by his brother, Pietro, in 1804.[58]
There was much progress in exploration when the French occupied Naples in 1799 and ruled over Italy from 1806 to 1815. The land on which Pompeii lies was expropriated and up to 700 workers were used in the excavations. The excavated areas in the north and south were connected. Parts of the Via dell'Abbondanza were also exposed in a west–east direction, and for the first time an impression of the size and appearance of the ancient town could be gained. In the following years, the excavators struggled with lack of money and excavations progressed slowly, but with significant finds such as the houses of the Faun, of Menandro, of the Tragic Poet and of the Surgeon.
Giuseppe Fiorelli took charge of the excavations in 1863 and made greater progress.[59] During early excavations of the site, occasional voids in the ash layer had been found that contained human remains. It was Fiorelli who realised these were spaces left by the decomposed bodies and so devised the technique of injecting plaster into them to recreate the forms of Vesuvius's victims. This technique is still in use today, with a clear resin now used instead of plaster because it is more durable, and does not destroy the bones, allowing further analysis.[60]
Fiorelli also introduced scientific documentation. He divided the city into the present nine areas (regiones) and blocks (insulae) and numbered the entrances of the individual houses (domus), so that each is identified by these three numbers. Fiorelli also published the first periodical with excavation reports. Under Fiorelli's successors the entire west of the city was exposed.
In the 1920s, Amedeo Maiuri excavated for the first time in layers older than those of AD 79, in order to learn about the settlement history. Maiuri made the last excavations on a grand scale in the 1950s, when the area south of the Via dell'Abbondanza and the city wall was almost completely uncovered, but the work was poorly documented scientifically. Preservation was haphazard and presents today's archaeologists with great difficulty. Questionable reconstruction was done in the 1980s and 1990s after the severe earthquake of 1980, which caused great destruction. Since then, except for targeted soundings and excavations, work has been confined to the excavated areas. Further excavations on a large scale are not planned, and today archaeologists try to reconstruct, to document and, above all, to stop the ever faster decay.
Under the 'Great Pompeii Project', over 2.5 km of ancient walls are being relieved of the danger of collapse by treating the unexcavated areas behind the street fronts to increase drainage and reduce the pressure of groundwater and earth on the walls, a problem especially acute in the rainy season. As of August 2019, excavations have resumed in unexcavated areas of Regio V.[61]
Objects buried beneath Pompeii were well-preserved for almost 2,000 years as the lack of air and moisture allowed little to no deterioration. However, once exposed, Pompeii has been subject to both natural and man-made forces, which have rapidly increased deterioration.
Weathering, erosion, light exposure, water damage, poor methods of excavation and reconstruction, introduced plants and animals, tourism, vandalism and theft have all damaged the site in some way. The lack of adequate weather protection of all but the most interesting and important buildings has allowed original interior decoration to fade or be lost. Two-thirds of the city has been excavated, but the remnants of the city are rapidly deteriorating.[62]
Furthermore, during World War II many buildings were badly damaged or destroyed by bombs dropped in several raids by the Allied forces.[63]
The concern for conservation has continually troubled archaeologists. The ancient city was included in the 1996 World Monuments Watch by the World Monuments Fund, and again in 1998 and in 2000. In 1996 the organisation claimed that Pompeii "desperately need[ed] repair" and called for the drafting of a general plan of restoration and interpretation.[64] The organisation supported conservation at Pompeii with funding from American Express and the Samuel H. Kress Foundation.[65]
Today, funding is mostly directed into conservation of the site; however, due to the expanse of Pompeii and the scale of the problems, this is inadequate in halting the slow decay of the materials. A 2012 study recommended an improved strategy for interpretation and presentation of the site as a cost-effective method of improving its conservation and preservation in the short term.[66]
In June 2013, UNESCO declared that if restoration and preservation works "fail to deliver substantial progress in the next two years", Pompeii could be placed on the List of World Heritage in Danger.[67]
The 2,000-year-old Schola Armatorum ('House of the Gladiators') collapsed on 6 November 2010. The structure was not open to visitors, but the outside was visible to tourists. There was no immediate determination of what caused the building to collapse, although reports suggested water infiltration following heavy rains might have been responsible. The collapse provoked fierce controversy, with accusations of neglect.[68][69]
Under the Romans after the conquest by Sulla in 89 BC, Pompeii underwent a process of urban development which accelerated in the Augustan period from about 30 BC. New public buildings included the amphitheatre, a palaestra (gymnasium) with a central natatorium (cella natatoria), or swimming pool, two theatres, the Eumachia Building and at least four public baths. The amphitheatre has been cited by scholars as a model of sophisticated design, particularly in the area of crowd control.[70]
Other service buildings were the Macellum ("meat market"); the Pistrinum ("mill"); the Thermopolium (a fast food place that served hot and cold dishes and beverages), and cauponae ("cafes" or "dives" with a seedy reputation as hangouts for thieves and prostitutes). At least one building, the Lupanar, was dedicated to prostitution.[71] A large hotel or hospitium (of 1,000 square metres) was found at Murecine, a short distance from Pompeii, when the Naples-Salerno motorway was being built, and the Murecine Silver Treasure and the Tablets providing a unique record of business transactions were discovered.[72][73]
An aqueduct provided water to the public baths, to more than 25 street fountains, and to many private houses (domūs) and businesses. The aqueduct was a branch of the great Serino Aqueduct built to serve the other large towns in the Bay of Naples region and the important naval base at Misenum. The castellum aquae is well preserved, and includes many details of the distribution network and its controls.[74]
Modern archaeologists have excavated garden sites and urban domains to reveal the agricultural staples of Pompeii's economy. Pompeii was fortunate to have had fertile soil for crop cultivation. The soils surrounding Mount Vesuvius preceding its eruption have been revealed to have had good water-retention capabilities, implying productive agriculture. Airflow from the Tyrrhenian Sea provided moisture to the soil despite the hot, dry climate.[75] Barley, wheat, and millet were all produced, along with wine and olive oil in abundance for export to other regions.[76]
Evidence of wine traded across Italy from Pompeii in its most prosperous years can be found in recovered artefacts such as wine bottles in Rome.[76] For this reason, vineyards were of utmost importance to Pompeii's economy. The agricultural writer Columella suggested that each vineyard in Rome should produce a quota of three cullei of wine per jugerum, or else be uprooted. The nutrient-rich lands near Pompeii were extremely efficient at this and were often able to exceed these requirements by a steep margin, providing the incentive for local wineries to establish themselves.[76] While wine was exported for Pompeii's economy, the majority of the other agricultural goods were likely produced in quantities sufficient for the city's consumption.
Remains of large constructed wineries were found in the Forum Boarium, covered by cemented casts from the eruption of Vesuvius.[76] These historical vineyards are speculated to be strikingly similar in structure to modern-day vineyards across Italy.
Carbonised food plant remains, roots, seeds and pollens, have been found from gardens in Pompeii, Herculaneum, and from the Roman villa at Torre Annunziata. They revealed that emmer wheat, Italian millet, common millet, walnuts, pine nuts, chestnuts, hazel nuts, chickpeas, bitter vetch, broad beans, olives, figs, pears, onions, garlic, peaches, carob, grapes, and dates were consumed. All but the dates could have been produced locally.[77]
Town houses:
Exterior villas:
Other:
The discovery of erotic art in Pompeii and Herculaneum left archaeologists with a dilemma stemming from the clash between the mores of sexuality in ancient Rome and those of Counter-Reformation Europe. An unknown number of discoveries were hidden away again. A wall fresco depicting Priapus, the ancient god of sex and fertility, with his grotesquely enlarged penis, was covered with plaster. An older reproduction was locked away "out of prudishness" and opened only on request; it was only rediscovered in 1998, due to rainfall.[78] In 2018, an ancient fresco depicting an erotic scene of "Leda and the Swan" was discovered at Pompeii.[79]
Many artefacts from the buried cities are preserved in the Naples National Archaeological Museum. In 1819, when King Francis visited the Pompeii exhibition there with his wife and daughter, he was so embarrassed by the erotic artwork that he had it locked away in a "secret cabinet" (gabinetto segreto), a gallery within the museum accessible only to "people of mature age and respected morals". Re-opened, closed, re-opened again and then closed again for nearly 100 years, the Naples "Secret Museum" was briefly made accessible again at the end of the 1960s (the time of the sexual revolution) and was finally re-opened for viewing in 2000. Minors are still allowed entry only in the presence of a guardian or with written permission.[80]
Pompeii has been a popular tourist destination for over 250 years;[81] it was on the Grand Tour. By 2008, it was attracting almost 2.6 million visitors per year, making it one of the most popular tourist sites in Italy.[82] It is part of a larger Vesuvius National Park and was declared a World Heritage Site by UNESCO in 1997. To combat problems associated with tourism, the governing body for Pompeii, the 'Soprintendenza Archeologica di Pompei', has begun issuing new tickets that allow tourists to visit cities such as Herculaneum and Stabiae as well as the Villa Poppaea, to encourage visitors to see these sites and reduce pressure on Pompeii.
Pompeii is a driving force behind the economy of the nearby town of Pompei. Many residents are employed in the tourism and hospitality industry, serving as taxi or bus drivers, waiters, or hotel staff.[citation needed]
Excavations at the site have generally ceased due to a moratorium imposed by the superintendent of the site, Professor Pietro Giovanni Guzzo. The site is generally less accessible to tourists than in the past: fewer than a third of the buildings that were open in the 1960s are available for public viewing today.
The 1954 film, Journey to Italy, starring George Sanders and Ingrid Bergman, includes a scene at Pompeii in which they witness the excavation of a cast of a couple that perished in the eruption.
Pompeii was the setting for the British comedy television series Up Pompeii! and the movie of the series. Pompeii also featured in the second episode of the fourth season of revived BBC science fiction series Doctor Who, named "The Fires of Pompeii",[83] which featured Caecilius as a character.
In 1971, the rock band Pink Floyd filmed a live concert titled Pink Floyd: Live at Pompeii, in which they performed six songs in the ancient Roman amphitheatre in the city. The audience consisted only of the film's production crew and some local children.
Siouxsie and the Banshees wrote and recorded the punk-inflected dance song "Cities in Dust", which describes the disaster that befell Pompeii and Herculaneum in AD 79. The song appears on their album Tinderbox, released in 1985, on Polydor Records. The jacket of the single remix of the song features the plaster cast of the chained dog killed in Pompeii.
Pompeii is a novel by Robert Harris, published in 2003, recounting an aquarius's race to repair the broken aqueduct in the days leading up to the eruption of Vesuvius. It was inspired by actual events and people.
"Pompeii" is a song by the British band Bastille, released 24 February 2013. The lyrics refer to the city and the eruption of Mount Vesuvius.
Pompeii is a 2014 German-Canadian historical disaster film produced and directed by Paul W. S. Anderson.[84]
In 2016, 45 years after the Pink Floyd recordings, band guitarist David Gilmour returned to the Pompeii amphitheatre to perform a live concert for his Rattle That Lock Tour. This event was considered the first in the amphitheatre to feature an audience since the AD 79 eruption of Vesuvius.[85][86]
The Basilica
Fresco from the Villa dei Misteri
The Forum
The Temple of Apollo
The House of the Faun
The Forum
en/4722.html.txt
ADDED
@@ -0,0 +1,172 @@
Pompeii (/pɒmˈpeɪ(i)/, Latin: [pɔmˈpeːjjiː]) was an ancient city located in what is now the comune of Pompei near Naples in the Campania region of Italy. Pompeii, along with Herculaneum and many villas in the surrounding area (e.g. at Boscoreale, Stabiae), was buried under 4 to 6 m (13 to 20 ft) of volcanic ash and pumice in the eruption of Mount Vesuvius in AD 79.
Largely preserved under the ash, the excavated city offered a unique snapshot of Roman life, frozen at the moment it was buried,[1] and an extraordinarily detailed insight into the everyday life of its inhabitants, although much of the evidence was lost in the early excavations. It was a wealthy town, enjoying many fine public buildings and luxurious private houses with lavish decorations, furnishings and works of art, which were the main attractions for the early excavators. Organic remains, including wooden objects and human bodies, were entombed in the ash and decayed, leaving voids which archaeologists found could be used as moulds to make plaster casts of unique and often gruesome figures in their final moments of life. The numerous graffiti carved on the walls and inside rooms provide a wealth of examples of the largely lost Vulgar Latin spoken colloquially at the time, contrasting with the formal language of the classical writers.
Pompeii is a UNESCO World Heritage Site and is one of the most popular tourist attractions in Italy, with approximately 2.5 million visitors annually.[2]
After many excavations prior to 1960 that had uncovered most of the city but left it in decay,[3] further major excavations were banned and instead they were limited to targeted, prioritised areas. In 2018, these led to new discoveries in some previously unexplored areas of the city.[4][5][6][7]
Pompeii (pronounced [pɔmˈpɛjjiː]) in Latin is a second declension plural noun (Pompeiī, -ōrum). According to Theodor Kraus, "The root of the word Pompeii would appear to be the Oscan word for the number five, pompe, which suggests that either the community consisted of five hamlets or perhaps it was settled by a family group (gens Pompeia)."[8]
Pompeii was built about 40 metres (130 ft) above sea level on a coastal lava plateau created by earlier eruptions of Mount Vesuvius, 8 km (5.0 mi) distant. The plateau fell steeply to the south, and partly to the west, into the sea. Three sheets of sediment from large landslides lie on top of the lava, perhaps triggered by extended rainfall.[9] The city bordered the coastline, though today it is 700 metres (2,300 ft) away. The mouth of the navigable Sarno River, adjacent to the city, was protected by lagoons and served early Greek and Phoenician sailors as a safe haven and port, which was developed further by the Romans.
Pompeii covered a total of 64 to 67 hectares (160 to 170 acres) and was home to 11,000 to 11,500 people, based on household counts.[10]
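Taken at face value, the area and household-count figures above bracket the town's population density; the short calculation below is only an illustration of that arithmetic (the function name is mine, not a figure from the sources):

```python
# Bracket Pompeii's population density from the ranges quoted above:
# 64-67 hectares, 11,000-11,500 inhabitants.

def density_per_hectare(population: int, hectares: float) -> float:
    """People per hectare of walled area."""
    return population / hectares

# Lowest plausible density: fewest people spread over the largest area.
low = density_per_hectare(11_000, 67)
# Highest plausible density: most people packed into the smallest area.
high = density_per_hectare(11_500, 64)

print(f"{low:.0f}-{high:.0f} inhabitants per hectare")  # roughly 164-180
```

Even the low end of that bracket is a dense urban settlement by pre-modern standards, which is consistent with the closely packed insulae revealed by excavation.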
Although best known for its Roman remains visible today dating from AD 79, it was built upon a substantial city dating from much earlier times. Expansion of the city from an early nucleus (the old town) accelerated from as early as 450 BC under the Greeks, after the Battle of Cumae.[11]
The first stable settlements on the site date back to the 8th century BC when the Oscans,[12] a people of central Italy, founded five villages in the area.
With the arrival of the Greeks in Campania from around 740 BC, Pompeii entered the orbit of the Hellenic people and the most important building of this period is the Doric Temple, built away from the centre in what would later become the Triangular Forum.[13]:62 At the same time the cult of Apollo was introduced.[14] Greek and Phoenician sailors used the location as a safe port.
In the early 6th century BC, the settlement merged into a single community centred on the important crossroad between Cumae, Nola, and Stabiae and was surrounded by a tufa city wall (the pappamonte wall).[15][16] The first wall (which was also used as a base for the later wall) unusually enclosed a much greater area than the early town together with much agricultural land.[17] That such an impressive wall was built at this time indicates that the settlement was already important and wealthy. The city began to flourish and maritime trade started with the construction of a small port near the mouth of the river.[13] The earliest settlement was focused in regions VII and VIII of the town (the old town) as identified from stratigraphy below the Samnite and Roman buildings, as well as from the different and irregular street plan.
In 524 BC, the Etruscans arrived and settled in the area, including Pompeii, finding in the River Sarno a communication route between the sea and the interior. Like the Greeks, the Etruscans did not conquer the city militarily, but simply controlled it and Pompeii enjoyed a sort of autonomy.[13]:63 Nevertheless, Pompeii became a member of the Etruscan League of cities.[18] Excavations in 1980–1981 have shown the presence of Etruscan inscriptions and a 6th-century BC necropolis.[19] Under the Etruscans a primitive forum or simple market square was built, as well as the Temple of Apollo, in both of which objects including fragments of bucchero were found by Maiuri.[20] Several houses were built with the so-called Tuscan atrium, typical of this people.[13]:64
The city wall was strengthened in the early-5th century BC with two façades of relatively thin, vertically set, slabs of Sarno limestone some four metres apart filled with earth (the orthostate wall).[21]
In 474 BC the Greek city of Cumae, allied with Syracuse, defeated the Etruscans at the Battle of Cumae and gained control of the area.
The period between about 450–375 BC witnessed large areas of the city being abandoned while important sanctuaries such as the Temple of Apollo show a sudden lack of votive material remains.[22]
The Samnites, people from the areas of Abruzzo and Molise, and allies of the Romans, conquered Greek Cumae between 423 and 420 BC and it is likely that all the surrounding territory, including Pompeii, was already conquered around 424 BC. The new rulers gradually imposed their architecture and enlarged the town.
From 343–341 BC in the Samnite Wars, the first Roman army entered the Campanian plain, bringing with it the customs and traditions of Rome, and in the Roman Latin War from 340 BC the Samnites were faithful to Rome. Pompeii, although governed by the Samnites, entered the Roman orbit, to which it remained faithful even during the Third Samnite War and in the war against Pyrrhus. In the late 4th century BC the city began to expand from its nucleus into the open walled area. The street plan of the new areas was more regular, closer to the grid plan associated with Hippodamus. The city walls were reinforced in Sarno stone in the early 3rd century BC (the limestone enceinte, or "first Samnite wall"). It formed the basis for the currently visible walls, with an outer wall of rectangular limestone blocks as a terrace wall supporting a large agger, or earth embankment, behind it.
After the Samnite Wars from 290 BC, Pompeii was forced to accept the status of socii of Rome, maintaining, however, linguistic and administrative autonomy.
From the outbreak of the Second Punic War (218–201 BC) in which Pompeii remained faithful to Rome, an additional internal wall was built of tufa and the internal agger and outer façade raised resulting in a double parapet with wider wall-walk.[13] Despite the political uncertainty of these events and the progressive migration of wealthy men to quieter cities in the eastern Mediterranean, Pompeii continued to flourish due to the production and trade of wine and oil with places like Provence and Spain,[23] as well as to intensive agriculture on farms around the city.
In the 2nd century BC, Pompeii enriched itself by taking part in Rome's conquest of the east as shown by a statue of Apollo in the Forum erected by Lucius Mummius in gratitude for their support in the sack of Corinth and the eastern campaigns. These riches enabled Pompeii to bloom and expand to its ultimate limits. The forum and many public and private buildings of high architectural quality were built, including Teatro Grande, the Temple of Jupiter, the Basilica, the Comitium, the Stabian Baths and a new two-story portico.[24]
Pompeii was one of the towns of Campania that rebelled against Rome in the Social Wars and in 89 BC it was besieged by Sulla, who targeted the strategically vulnerable Porta Ercolano with his artillery as can still be seen by the impact craters of thousands of ballista shots in the walls. Many nearby buildings inside the walls were also destroyed.[25] Although the battle-hardened troops of the Social League, headed by Lucius Cluentius, helped in resisting the Romans, Pompeii was forced to surrender after the conquest of Nola.
The result was that Pompeii became a Roman colony with the name of Colonia Cornelia Veneria Pompeianorum. Many of Sulla's veterans were given land and property in and around the city, while many of those who opposed Rome were dispossessed of their property. Despite this, the Pompeians were granted Roman citizenship and they were quickly assimilated into the Roman world. The main language in the city became Latin,[26] and many of Pompeii's old aristocratic families Latinized their names as a sign of assimilation.[27]
The city became an important passage for goods that arrived by sea and had to be sent toward Rome or Southern Italy along the nearby Appian Way. Many public buildings were built or refurbished and improved under the new order; new buildings included the Amphitheatre of Pompeii in 70 BC, the Forum Baths, and the Odeon, while the forum was embellished with the colonnade of Popidius before 80 BC.[28] These buildings raised the status of Pompeii as a cultural centre in the region as it outshone its neighbours in the number of places for entertainment which significantly enhanced the social and economic development of the city.
Under Augustus, from about 30 BC a major expansion in new public buildings, as in the rest of the empire, included the Eumachia Building, the Sanctuary of Augustus and the Macellum. From about 20 BC, Pompeii was fed with running water by a spur from the Serino Aqueduct, built by Marcus Vipsanius Agrippa.
In AD 59, there was a serious riot and bloodshed in the amphitheatre between Pompeians and Nucerians (which is recorded in a fresco) and which led the Roman senate to send the Praetorian Guard to restore order and to ban further events for a period of ten years.[29][30]
The inhabitants of Pompeii had long been used to minor earthquakes (indeed, the writer Pliny the Younger wrote that earth tremors "were not particularly alarming because they are frequent in Campania"), but on 5 February 62[31] a severe earthquake did considerable damage around the bay, and particularly to Pompeii. It is believed that the earthquake would have registered between about 5 and 6 on the Richter magnitude scale.[32]
On that day in Pompeii, there were to be two sacrifices, as it was the anniversary of Augustus being named "Father of the Nation" and also a feast day to honour the guardian spirits of the city. Chaos followed the earthquake; fires caused by oil lamps that had fallen during the quake added to the panic. The nearby cities of Herculaneum and Nuceria were also affected.[32]
Between 62 and the eruption in 79 most rebuilding was done in the private sector and older, damaged frescoes were often covered with newer ones, for example. In the public sector the opportunity was taken to improve buildings and the city plan e.g. in the forum.[33]
An important field of current research concerns structures that were restored between the earthquake of 62 and the eruption. It was thought until recently that some of the damage had still not been repaired at the time of the eruption but this has been shown to be doubtful as the evidence of missing forum statues and marble wall-veneers are most likely due to robbers after the city's burial.[34][35] The public buildings on the east side of the forum were largely restored and were even enhanced by beautiful marble veneers and other modifications to the architecture.[36]
Some buildings like the Central Baths were only started after the earthquake and were built to enhance the city with modern developments in their architecture, as had been done in Rome, in terms of wall-heating and window glass, and with well-lit spacious rooms. The new baths took over a whole insula by demolishing houses, which may have been made easier by the earthquake that had damaged these houses. This shows that the city was still flourishing rather than struggling to recover from the earthquake.[37]
In about 64, Nero and his wife Poppaea visited Pompeii and made gifts to the temple of Venus, probably when he performed in the theatre of Naples.[38]
By 79, Pompeii had a population of 20,000,[39] which had prospered from the region's renowned agricultural fertility and favourable location.
The eruption lasted for two days.[40] The first phase was of pumice rain (lapilli) lasting about 18 hours, allowing most inhabitants to escape. That only approximately 1,150 bodies[41] have so far been found on site seems to confirm this theory and most escapees probably managed to salvage some of their most valuable belongings; many skeletons were found with jewellery, coins and silverware.
At some time in the night or early the next day, pyroclastic flows began near the volcano, consisting of high-speed, dense, and very hot ash clouds, knocking down all structures in their path wholly or partly, incinerating or suffocating the remaining population and altering the landscape, including the coastline. By the evening of the second day, the eruption was over, leaving only haze in the atmosphere through which the sun shone weakly.
A multidisciplinary volcanological and bio-anthropological study[42] of the eruption products and victims, merged with numerical simulations and experiments, indicates that at Pompeii and surrounding towns heat was the main cause of death of people, previously believed to have died by ash suffocation. The results of the study, published in 2010, show that exposure to at least 250 °C (480 °F) hot pyroclastic flows at a distance of 10 kilometres (6 miles) from the vent was sufficient to cause instant death, even if people were sheltered within buildings. The people and buildings of Pompeii were covered in up to twelve different layers of tephra, in total up to 6 metres (19.7 ft) deep.
Pliny the Younger provided a first-hand account of the eruption of Mount Vesuvius from his position across the Bay of Naples at Misenum but written 25 years after the event.[43] His uncle, Pliny the Elder, with whom he had a close relationship, died while attempting to rescue stranded victims. As admiral of the fleet, Pliny the Elder had ordered the ships of the Imperial Navy stationed at Misenum to cross the bay to assist evacuation attempts. Volcanologists have recognised the importance of Pliny the Younger's account of the eruption by calling similar events "Plinian". It had long been thought that the eruption was an August event based on one version of the letter but another version[44] gives a date of the eruption as late as 23 November. A later date is consistent with a charcoal inscription at the site, discovered in 2018, which includes the date of 17 October and which must have been recently written.[45]
Clear support for an October/November eruption is found in the fact that people buried in the ash appear to have been wearing heavier clothing than the light summer clothes typical of August. The fresh fruit and vegetables in the shops are typical of October – and conversely the summer fruit typical of August was already being sold in dried, or conserved form. Nuts from chestnut trees were found at Oplontis which would not have been mature before mid-September.[46] Wine fermenting jars had been sealed, which would have happened around the end of October. Coins found in the purse of a woman buried in the ash include one with a 15th imperatorial acclamation among the emperor's titles. These coins could not have been minted before the second week of September.[44]
Titus appointed two ex-consuls to organise a relief effort, while donating large amounts of money from the imperial treasury to aid the victims of the volcano.[47] He visited Pompeii once after the eruption and again the following year[48] but no work was done on recovery.
Soon after the burial of the city, survivors and possibly thieves came to salvage valuables, including the marble statues from the forum and other precious materials from buildings. There is wide evidence of post-eruption disturbance, including holes made through walls. The city was not completely buried, and tops of larger buildings would have been above the ash making it obvious where to dig or salvage building material.[49] The robbers left traces of their passage, as in a house where modern archaeologists found a wall graffito saying "house dug".[50]
Over the following centuries, its name and location were forgotten, though it still appeared on the Tabula Peutingeriana of the 4th century. Further eruptions particularly in 471–473 and 512 covered the remains more deeply. The area became known as the La Civita (the city) due to the features in the ground.[51]
The next known date that any part was unearthed was in 1592, when the architect Domenico Fontana, while digging an underground aqueduct to the mills of Torre Annunziata, ran into ancient walls covered with paintings and inscriptions. His aqueduct passed through and under a large part of the city[52] and would have had to pass through many buildings and foundations, as can still be seen in many places today, but he kept quiet and nothing more came of the discovery.
In 1689, Francesco Picchetti saw a wall inscription mentioning decurio Pompeiis ("town councillor of Pompeii"), but he associated it with a villa of Pompey. Francesco Bianchini pointed out the true meaning, and he was supported by Giuseppe Macrini, who in 1693 excavated some walls and wrote that Pompeii lay beneath La Civita.[53]
Herculaneum itself was rediscovered in 1738 by workmen digging the foundations of a summer palace for the King of Naples, Charles of Bourbon. Owing to the spectacular quality of the finds, the Spanish military engineer Rocque Joaquin de Alcubierre began excavations to find further remains at the site of Pompeii in 1748, though the city was not yet identified.[54] Charles of Bourbon took great interest in the finds, even after leaving to become King of Spain, because the display of antiquities reinforced the political and cultural prestige of Naples.[55] On 20 August 1763, an inscription [...] Rei Publicae Pompeianorum [...] was found and the city was identified as Pompeii.[56]
Karl Weber directed the first scientific excavations.[57] He was followed in 1764 by the military engineer Francisco la Vega, who was succeeded by his brother, Pietro, in 1804.[58]
There was much progress in exploration when the French occupied Naples in 1799 and ruled over Italy from 1806 to 1815. The land on which Pompeii lies was expropriated and up to 700 workers were used in the excavations. The excavated areas in the north and south were connected. Parts of the Via dell'Abbondanza were also exposed in west–east direction and for the first time an impression of the size and appearance of the ancient town could be appreciated. In the following years, the excavators struggled with lack of money and excavations progressed slowly, but with significant finds such as the houses of the Faun, of Menandro, of the Tragic Poet and of the Surgeon.
Giuseppe Fiorelli took charge of the excavations in 1863 and made greater progress.[59] During early excavations of the site, occasional voids in the ash layer had been found that contained human remains. It was Fiorelli who realised these were spaces left by the decomposed bodies and so devised the technique of injecting plaster into them to recreate the forms of Vesuvius's victims. This technique is still in use today, with a clear resin now used instead of plaster because it is more durable, and does not destroy the bones, allowing further analysis.[60]
Fiorelli also introduced scientific documentation. He divided the city into the present nine areas (regiones) and blocks (insulae) and numbered the entrances of the individual houses (domus), so that each is identified by these three numbers. Fiorelli also published the first periodical with excavation reports. Under Fiorelli's successors the entire west of the city was exposed.
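Fiorelli's scheme is effectively a hierarchical address still used in site documentation. A minimal sketch of parsing it (the `PompeiiAddress` class and its field names are illustrative, not an official notation library; the House of the Faun is conventionally cited as VI.12.2):

```python
from dataclasses import dataclass

# Fiorelli addresses are written regio.insula.entrance, e.g. "VI.12.2".
# Regiones are the nine city regions, numbered with Roman numerals I-IX.
ROMAN = {"I": 1, "II": 2, "III": 3, "IV": 4, "V": 5,
         "VI": 6, "VII": 7, "VIII": 8, "IX": 9}

@dataclass(frozen=True)
class PompeiiAddress:
    regio: int      # one of Fiorelli's nine city regions
    insula: int     # block within the region
    entrance: int   # numbered doorway within the block

    @classmethod
    def parse(cls, text: str) -> "PompeiiAddress":
        regio, insula, entrance = text.split(".")
        return cls(ROMAN[regio], int(insula), int(entrance))

addr = PompeiiAddress.parse("VI.12.2")  # House of the Faun
print(addr)  # PompeiiAddress(regio=6, insula=12, entrance=2)
```

The three-level hierarchy is what lets every excavation report, photograph, and find be tied unambiguously to a single doorway.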
In the 1920s, Amedeo Maiuri excavated for the first time in layers older than that of AD 79, in order to learn about the settlement's history. Maiuri made the last excavations on a grand scale in the 1950s, when the area south of the Via dell'Abbondanza and the city wall was almost completely uncovered, but the work was poorly documented scientifically. Preservation was haphazard and presents today's archaeologists with great difficulty. Questionable reconstruction was done in the 1980s and 1990s after the severe earthquake of 1980, which caused great destruction. Since then, except for targeted soundings and excavations, work has been confined to the excavated areas. Further excavations on a large scale are not planned; today archaeologists try to reconstruct, to document and, above all, to halt the ever faster decay.
Under the 'Great Pompeii Project' over 2.5 km of ancient walls are being relieved of danger of collapse by treating the unexcavated areas behind the street fronts in order to increase drainage and reduce the pressure of ground water and earth on the walls, a problem especially in the rainy season. As of August 2019, these excavations have resumed on unexcavated areas of Regio V.[61]
Objects buried beneath Pompeii were well-preserved for almost 2,000 years as the lack of air and moisture allowed little to no deterioration. However, once exposed, Pompeii has been subject to both natural and man-made forces, which have rapidly increased deterioration.
Weathering, erosion, light exposure, water damage, poor methods of excavation and reconstruction, introduced plants and animals, tourism, vandalism and theft have all damaged the site in some way. The lack of adequate weather protection of all but the most interesting and important buildings has allowed original interior decoration to fade or be lost. Two-thirds of the city has been excavated, but the remnants of the city are rapidly deteriorating.[62]
Furthermore, during World War II many buildings were badly damaged or destroyed by bombs dropped in several raids by the Allied forces.[63]
The concern for conservation has continually troubled archaeologists. The ancient city was included in the 1996 World Monuments Watch by the World Monuments Fund, and again in 1998 and in 2000. In 1996 the organisation claimed that Pompeii "desperately need[ed] repair" and called for the drafting of a general plan of restoration and interpretation.[64] The organisation supported conservation at Pompeii with funding from American Express and the Samuel H. Kress Foundation.[65]
Today, funding is mostly directed into conservation of the site; however, due to the expanse of Pompeii and the scale of the problems, this is inadequate in halting the slow decay of the materials. A 2012 study recommended an improved strategy for interpretation and presentation of the site as a cost-effective method of improving its conservation and preservation in the short term.[66]
In June 2013, UNESCO declared that if restoration and preservation works "fail to deliver substantial progress in the next two years", Pompeii could be placed on the List of World Heritage in Danger.[67]
The 2,000-year-old Schola Armatorum ('House of the Gladiators') collapsed on 6 November 2010. The structure was not open to visitors, but the outside was visible to tourists. There was no immediate determination as to what caused the building to collapse, although reports suggested water infiltration following heavy rains might have been responsible. There has been fierce controversy after the collapse, with accusations of neglect.[68][69]
Under the Romans after the conquest by Sulla in 89 BC, Pompeii underwent a process of urban development which accelerated in the Augustan period from about 30 BC. New public buildings include the amphitheatre with palaestra or gymnasium with a central natatorium (cella natatoria) or swimming pool, two theatres, the Eumachia Building and at least four public baths. The amphitheatre has been cited by scholars as a model of sophisticated design, particularly in the area of crowd control.[70]
Other service buildings were the Macellum ("meat market"); the Pistrinum ("mill"); the Thermopolium (a fast food place that served hot and cold dishes and beverages), and cauponae ("cafes" or "dives" with a seedy reputation as hangouts for thieves and prostitutes). At least one building, the Lupanar, was dedicated to prostitution.[71] A large hotel or hospitium (of 1,000 square metres) was found at Murecine, a short distance from Pompeii, when the Naples-Salerno motorway was being built, and the Murecine Silver Treasure and the Tablets providing a unique record of business transactions were discovered.[72][73]
An aqueduct provided water to the public baths, to more than 25 street fountains, and to many private houses (domūs) and businesses. The aqueduct was a branch of the great Serino Aqueduct built to serve the other large towns in the Bay of Naples region and the important naval base at Misenum. The castellum aquae is well preserved, and includes many details of the distribution network and its controls.[74]
Modern archaeologists have excavated garden sites and urban domains to reveal the agricultural staples of Pompeii's economy. Pompeii was fortunate to have had fertile soil for crop cultivation. The soils surrounding Mount Vesuvius preceding its eruption have been revealed to have had good water-retention capabilities, implying productive agriculture. The Tyrrhenian Sea's airflow provided hydration to the soil despite the hot, dry climate.[75] Barley, wheat, and millet were all produced along with wine and olive oil, in abundance for export to other regions.[76]
Evidence of wine exported from Pompeii in its most prosperous years can be found in recovered artefacts such as wine bottles in Rome.[76] For this reason, vineyards were of the utmost importance to Pompeii's economy. The agricultural writer Columella suggested that each vineyard in Rome produce a quota of three cullei of wine per jugerum, otherwise the vineyard would be uprooted. The nutrient-rich lands near Pompeii were extremely efficient at this and were often able to exceed these requirements by a steep margin, providing the incentive for local wineries to establish themselves.[76] While wine was exported for Pompeii's economy, the majority of the other agricultural goods were likely produced in quantities sufficient for the city's consumption.
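Columella's quota can be put into modern units for a sense of scale. The conversion factors below are approximate scholarly values, not figures from this article (roughly 520 litres per culleus and roughly 0.25 hectare per iugerum):

```python
# Convert Columella's minimum yield of 3 cullei per iugerum to litres per hectare.
# Assumed conversions (approximate): 1 culleus ~= 520 L, 1 iugerum ~= 0.25 ha.
LITRES_PER_CULLEUS = 520.0
HECTARES_PER_IUGERUM = 0.25

def quota_litres_per_hectare(cullei_per_iugerum: float) -> float:
    """Minimum wine yield in litres per hectare implied by the quota."""
    return cullei_per_iugerum * LITRES_PER_CULLEUS / HECTARES_PER_IUGERUM

print(f"{quota_litres_per_hectare(3):.0f} L/ha")  # 6240 L/ha under these assumptions
```

Under these assumptions the quota corresponds to several thousand litres of wine per hectare, which helps explain why exceeding it "by a steep margin" made the land around Pompeii so commercially attractive.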
Remains of large formations of constructed wineries were found in the Forum Boarium, covered by cemented casts from the eruption of Vesuvius.[76] It is speculated that these historical vineyards are strikingly similar in structure to the modern day vineyards across Italy.
Carbonised food plant remains, roots, seeds and pollens, have been found from gardens in Pompeii, Herculaneum, and from the Roman villa at Torre Annunziata. They revealed that emmer wheat, Italian millet, common millet, walnuts, pine nuts, chestnuts, hazel nuts, chickpeas, bitter vetch, broad beans, olives, figs, pears, onions, garlic, peaches, carob, grapes, and dates were consumed. All but the dates could have been produced locally.[77]
Town houses:
Exterior villas:
Other:
The discovery of erotic art in Pompeii and Herculaneum left the archaeologists with a dilemma stemming from the clash of cultures between the mores of sexuality in ancient Rome and in Counter-Reformation Europe. An unknown number of discoveries were hidden away again. A wall fresco depicting Priapus, the ancient god of sex and fertility, with his grotesquely enlarged penis, was covered with plaster. An older reproduction was locked away "out of prudishness" and opened only on request – and only rediscovered in 1998 due to rainfall.[78] In 2018, an ancient fresco depicting an erotic scene of "Leda and the Swan" was discovered at Pompeii.[79]
Many artefacts from the buried cities are preserved in the Naples National Archaeological Museum. In 1819, when King Francis visited the Pompeii exhibition there with his wife and daughter, he was so embarrassed by the erotic artwork that he had it locked away in a "secret cabinet" (gabinetto segreto), a gallery within the museum accessible only to "people of mature age and respected morals". Re-opened, closed, re-opened again and then closed again for nearly 100 years, the Naples "Secret Museum" was briefly made accessible again at the end of the 1960s (the time of the sexual revolution) and was finally re-opened for viewing in 2000. Minors are still allowed entry only in the presence of a guardian or with written permission.[80]
Pompeii has been a popular tourist destination for over 250 years;[81] it was on the Grand Tour. By 2008, it was attracting almost 2.6 million visitors per year, making it one of the most popular tourist sites in Italy.[82] It is part of a larger Vesuvius National Park and was declared a World Heritage Site by UNESCO in 1997. To combat problems associated with tourism, the governing body for Pompeii, the 'Soprintendenza Archeologica di Pompei', has begun issuing new tickets that allow tourists to visit other sites such as Herculaneum and Stabiae as well as the Villa Poppaea, to encourage visitors to see them and reduce pressure on Pompeii.
Pompeii is a driving force behind the economy of the nearby town of Pompei. Many residents are employed in the tourism and hospitality industry, serving as taxi or bus drivers, waiters, or hotel staff.[citation needed]
Excavations at the site have generally ceased due to a moratorium imposed by the superintendent of the site, Professor Pietro Giovanni Guzzo. The site is generally less accessible to tourists than in the past, with fewer than a third of the buildings that were open in the 1960s available for public viewing today.
The 1954 film Journey to Italy, starring George Sanders and Ingrid Bergman, includes a scene at Pompeii in which the characters witness the excavation of a cast of a couple who perished in the eruption.
Pompeii was the setting for the British comedy television series Up Pompeii! and the movie of the series. Pompeii also featured in the second episode of the fourth season of revived BBC science fiction series Doctor Who, named "The Fires of Pompeii",[83] which featured Caecilius as a character.
In 1971, the rock band Pink Floyd filmed a live concert titled Pink Floyd: Live at Pompeii, in which they performed six songs in the ancient Roman amphitheatre in the city. The audience consisted only of the film's production crew and some local children.
Siouxsie and the Banshees wrote and recorded the punk-inflected dance song "Cities in Dust", which describes the disaster that befell Pompeii and Herculaneum in AD 79. The song appears on their album Tinderbox, released in 1985, on Polydor Records. The jacket of the single remix of the song features the plaster cast of the chained dog killed in Pompeii.
Pompeii is a novel written by Robert Harris (published in 2003) featuring the account of the aquarius's race to fix the broken aqueduct in the days leading up to the eruption of Vesuvius, inspired by actual events and people.
"Pompeii" is a song by the British band Bastille, released 24 February 2013. The lyrics refer to the city and the eruption of Mount Vesuvius.
Pompeii is a 2014 German-Canadian historical disaster film produced and directed by Paul W. S. Anderson.[84]
In 2016, 45 years after the Pink Floyd recordings, band guitarist David Gilmour returned to the Pompeii amphitheatre to perform a live concert for his Rattle That Lock Tour. This event was considered the first in the amphitheatre to feature an audience since the AD 79 eruption of Vesuvius.[85][86]
The Basilica
Fresco from the Villa dei Misteri
The Forum
The Temple of Apollo
The House of the Faun
The Forum
en/4723.html.txt
A firefighter (also known as a fireman or firewoman) is a rescuer extensively trained in firefighting, primarily to extinguish hazardous fires that threaten life, property, and the environment as well as to rescue people and animals from dangerous situations.
The complexity of modern, industrialized life has created an increase in the skills needed in firefighting technology. The fire service, also known in some countries as the fire brigade or fire department, is one of the three main emergency services. From urban areas to aboard ships, firefighters have become ubiquitous around the world.
The skills required for safe operations are regularly practiced during training evaluations throughout a firefighter's career. Initial firefighting skills are normally taught through local, regional or state-approved fire academies or training courses.[1] Depending on the requirements of a department, additional skills and certifications such as technical rescue and pre-hospital medicine may also be acquired at this time.
Firefighters work closely with other emergency response agencies such as the police and emergency medical service. A firefighter's role may overlap with both. Fire investigators or fire marshals investigate the cause of a fire. If the fire was caused by arson or negligence, their work will overlap with law enforcement. Firefighters also frequently provide some degree of emergency medical service, in addition to working with full-time paramedics.
The basic tasks of firefighters include: fire suppression, rescue, fire prevention, basic first aid, and investigations. Firefighting is further broken down into skills which include: size-up, extinguishing, ventilation, search and rescue, salvage, containment, mop up and overhaul.
A fire burns due to the presence of three elements: fuel, oxygen and heat. This is often referred to as the fire triangle. Sometimes it is known as the fire tetrahedron if a fourth element is added: a chemical chain reaction which can help sustain certain types of fire. The aim of firefighting is to deprive the fire of at least one of those elements. Most commonly this is done by dousing the fire with water, though some fires require other methods such as foam or dry agents. Firefighters are equipped with a wide variety of equipment for this purpose that include: ladder trucks, pumper trucks, tanker trucks, fire hose, and fire extinguishers.
While sometimes fires can be limited to small areas of a structure, wider collateral damage due to smoke, water and burning embers is common. Utility shutoff (such as gas and electricity) is typically an early priority for arriving fire crews. In addition, forcible entry may be required in order to gain access into the structure. Specific procedures and equipment are needed at a property where hazardous materials are being used or stored.
Structure fires may be attacked with either "interior" or "exterior" resources, or both. Interior crews, using the "two in, two out" rule, may extend fire hose lines inside the building, find the fire and cool it with water. Exterior crews may direct water into windows and other openings, or against any nearby fuels exposed to the initial fire. Hose streams directed into the interior through exterior wall apertures may conflict and jeopardize interior fire attack crews.
Buildings made of flammable materials such as wood behave differently in a fire from buildings made of materials such as concrete. Generally, a "fire-resistant" building is designed to limit fire to a small area or floor. Other floors can remain safe if smoke inhalation and damage are prevented. All buildings on fire, or suspected of being on fire, must be evacuated, regardless of fire rating.
Some firefighting tactics may appear destructive, but often serve specific needs. For example, during ventilation, firefighters either open holes in the roof or floors of a structure (called vertical ventilation), or open windows and walls (called horizontal ventilation), to remove smoke and heated gases from the interior of the structure. Such ventilation methods are also used to improve interior visibility, allowing victims to be located more quickly. Ventilation helps to preserve the life of trapped or unconscious individuals, as it releases poisonous gases from inside the structure. Vertical ventilation is vital to firefighter safety in the event of a flashover or backdraft. Releasing the flammable gases through the roof eliminates the possibility of a backdraft, and the removal of heat can reduce the possibility of a flashover. Flashovers, due to their intense heat (900–1,200 °F; roughly 480–650 °C) and explosive behaviour, are commonly fatal to firefighters. Precautionary methods, such as smashing a window, reveal backdraft conditions before a firefighter enters the structure and meets them head-on. Firefighter safety is the number one priority.
Whenever possible during a structure fire, property is moved into the middle of a room and covered with a salvage cover, a heavy cloth-like tarp. Various steps, such as retrieving and protecting valuables found during suppression or overhaul, removing standing water, and boarding up windows and roofs, can divert or prevent post-fire runoff.
Wildfires (known in Australia as bushfires) require a unique set of strategies and tactics. In many countries such as Australia and the United States, these duties are mostly carried out by local volunteer firefighters. Wildfires have some ecological role in allowing new plants to grow, therefore in some cases they will be left to burn.[2] Priorities in fighting wildfires include preventing the loss of life and property.
Firefighters rescue people (and animals) from dangerous situations such as crashed vehicles, structural collapses, trench collapses, cave and tunnel emergencies, water and ice emergencies, elevator emergencies, energized electrical line emergencies, and industrial accidents.[3] In less common circumstances, firefighters rescue victims from hazardous materials emergencies as well as from steep cliffs, embankments, and high rises; the latter is referred to as high-angle or rope rescue. Many fire departments, including most in the United Kingdom, refer to themselves as a fire and rescue service for this reason. Large fire departments, such as the New York City Fire Department and London Fire Brigade, have specialist teams for advanced technical rescue. As building fires have been in decline for many years in developed countries such as the United States, rescues other than fires make up an increasing proportion of their firefighters' work.[4]
Firefighters frequently provide some degree of emergency medical care. In some jurisdictions first aid is the only medical training that firefighters have, and medical-only calls are the sole responsibility of a separate emergency medical services (EMS) agency. Elsewhere, it is common for firefighters to respond to medical-only calls. The impetus for this is the growing demand for emergency medicine and the decline of fires and traditional firefighting call-outs[4]—though fire departments still have to be able to respond to them—and their existing ability to respond rapidly to emergencies. A rapid response is particularly necessary for cardiac arrests, as these will lead to death if not treated within minutes.[5]
The dispatch of firefighters to medical emergencies is particularly common in fire departments that run the EMS, including most large cities of the United States. In those departments, firefighters are often jointly trained as emergency medical technicians in order to deliver basic life support, and more rarely as paramedics to deliver advanced life support. In the United Kingdom, where fire services and EMS are run separately, fire service co-responding has been introduced more recently.[6] Another point of variation is whether the firefighters respond in a fire engine or a response car.[7] Either way, separate employees to crew ambulances are still needed, unless the firefighters can work shifts on the ambulances.
Airports employ specialist firefighters to deal with potential ground emergencies. Due to the mass casualty potential of an aviation emergency, the speed with which emergency response equipment and personnel arrive at the scene of the emergency is of paramount importance. When dealing with an emergency, the airport firefighters are tasked with rapidly securing the aircraft, its crew and its passengers from all hazards, particularly fire. Airport firefighters have advanced training in the application of firefighting foams, dry chemical and clean agents used to extinguish burning aviation fuel.
Fire departments are usually the primary agency that responds to an emergency involving hazardous materials. Specialized firefighters, known as hazardous materials technicians, have training and certification in chemical identification, leak control, decontamination, and clean-up procedures.
Fire departments frequently provide advice to the public on how to prevent fires in home and workplace environments. Fire inspectors or fire marshals directly inspect businesses to ensure they meet current building fire codes,[8][9] which are enforced so that a building can sufficiently resist the spread of fire, potential hazards are identified, and occupants can be safely evacuated, commensurate with the risks involved.
Fire suppression systems have a proven record for controlling and extinguishing unwanted fires. Many fire officials recommend that every building, including residences, have fire sprinkler systems.[10] Correctly working sprinklers in a residence greatly reduce the risk of death from a fire.[11] With the small rooms typical of a residence, one or two sprinklers can cover most rooms. In the United States, housing industry trade groups have lobbied at the state level to prevent the requirement for fire sprinklers in 1- and 2-bedroom homes.[12][13]
Other methods of fire prevention are by directing efforts to reduce known hazardous conditions or by preventing dangerous acts before tragedy strikes. This is normally accomplished in many innovative ways such as conducting presentations, distributing safety brochures, providing news articles, writing public safety announcements (PSA) or establishing meaningful displays in well-visited areas. Ensuring that each household has working smoke alarms, is educated in the proper techniques of fire safety, has an evacuation route and rendezvous point is of top priority in public education for most fire prevention teams in almost all fire department localities.
Fire investigators, who are experienced firefighters trained in fire cause determination, are dispatched to fire scenes to investigate whether a fire was accidental or intentional. Some fire investigators have full law enforcement powers to investigate and arrest suspected arsonists.
To allow protection from the inherent risks of fighting fires, firefighters wear and carry protective and self-rescue equipment at all times. A self-contained breathing apparatus (SCBA) delivers air to the firefighter through a full face mask and is worn to protect against smoke inhalation, toxic fumes, and superheated gases. A special device called a personal alert safety system (PASS) is commonly worn independently or as part of the SCBA to alert others when a firefighter stops moving for a specified period of time or manually activates the device. The PASS device sounds an alarm that can assist a firefighter assist and search team (FAST), or rapid intervention team (RIT), in locating the firefighter in distress.
Firefighters often carry personal self-rescue ropes. The ropes are generally 30 feet long and can provide a firefighter (who has enough time to deploy the rope) a partially controlled exit from an elevated window. Lack of a personal rescue rope was cited in the deaths of two New York City firefighters, Lt. John Bellew and Lt. Curtis Meyran, who died after they jumped from the fourth floor of a burning apartment building in the Bronx. Of the four firefighters who jumped and survived, only one had a self-rescue rope. Since the incident, the Fire Department of New York City has issued self-rescue ropes to its firefighters.[14]
Heat injury is a major issue for firefighters, as they wear insulated clothing and cannot shed the heat generated by physical exertion. Early detection of heat issues is critical to stop dehydration and heat stress from becoming fatal. Early heat stress affects cognitive function, which, combined with operating in a dangerous environment, makes heat stress and dehydration critical issues to monitor. Firefighter physiological status monitoring shows promise in alerting EMS and commanders to the status of their people on the fire ground. Devices such as the PASS device alert others 10–20 seconds after a firefighter has stopped moving in a structure. Physiological status monitors measure a firefighter's vital signs, fatigue, and exertion levels and transmit this information over the voice radio, providing a degree of early warning of physiological stress. These devices[15] are similar to technology developed for the Future Force Warrior program and give a measure of exertion and fatigue. They also tell the people outside a building when a firefighter has stopped moving or fallen. This allows a supervisor to call in additional engines before the crew becomes exhausted, and gives an early warning to firefighters before they run out of air, as they may not be able to make voice calls over their radio. OSHA tables exist for heat injury and the allowable amount of work in a given environment based on temperature, humidity, and solar loading.[16]
Firefighters are also at risk for developing rhabdomyolysis. Rhabdomyolysis is the breakdown of muscle tissue and has many causes including heat exposure, high core body temperature, and prolonged, intense exertion. Routine firefighter tasks, such as carrying extra weight of equipment and working in hot environments, can increase firefighters’ risk for rhabdomyolysis.[17][18]
Another leading cause of death during firefighting is the structural collapse of a burning building (e.g. a wall, floor, ceiling, roof, or truss system). Structural collapse, which often occurs without warning, may crush or trap firefighters inside the structure. To avoid loss of life, all on-duty firefighters should maintain two-way radio communication with the incident commander on all incidents and be equipped with a personal alert safety system (PASS) device on all fire scenes.[19][20] Francis Brannigan was the founder of, and greatest contributor to, this element of firefighter safety.
In the United States, 25% of fatalities of firefighters are caused by traffic collisions while responding to or returning from an incident. Other firefighters have been injured or killed by vehicles at the scene of a fire or emergency (Paulison 2005). A common measure fire departments have taken to prevent this is to require firefighters to wear a bright yellow reflective vest over their turnout coats if they have to work on a public road, to make them more visible to passing drivers.[21] In addition to the direct dangers of firefighting, cardiovascular diseases account for approximately 45% of on duty firefighter deaths.[22]
Firefighters have sometimes been assaulted by members of the public while responding to calls. These kinds of attacks can cause firefighters to fear for their safety and may cause them to lose full focus on the situation, which could result in injury to themselves or the patient.[citation needed]
Once extinguished, fire debris cleanup poses several safety and health risks for workers.[23][24]
Many hazardous substances are commonly found in fire debris. Silica can be found in concrete and roofing tiles, or it may be a naturally occurring element. Occupational exposure to silica dust can cause silicosis, lung cancer, pulmonary tuberculosis, airway diseases, and some additional non-respiratory diseases.[25] Inhalation of asbestos can result in various diseases including asbestosis, lung cancer, and mesothelioma.[26] Sources of metal exposure include burnt or melted electronics, cars, refrigerators, stoves, etc. Fire debris cleanup workers may be exposed to these metals or their combustion products in the air or on their skin. These metals may include beryllium, cadmium, chromium, cobalt, lead, manganese, nickel, and many more.[23] Polycyclic aromatic hydrocarbons (PAHs), some of which are carcinogenic, come from the incomplete combustion of organic materials and are often found as a result of structural and wildland fires.[27]
Safety hazards of fire cleanup include the risk of reignition of smoldering debris, electrocution from downed or exposed electrical lines or in instances where water has come into contact with electrical equipment. Structures that have been burned may be unstable and at risk of sudden collapse.[24][28]
Standard personal protective equipment for fire cleanup includes hard hats, goggles or safety glasses, heavy work gloves, earplugs or other hearing protection, steel-toe boots, and fall protection devices.[28][29] Hazard controls for electrical injury include assuming all power lines are energized until confirmed de-energized, grounding power lines to guard against electrical feedback, and using appropriate personal protective equipment.[28] Proper respiratory protection can protect against hazardous substances. Proper ventilation of an area is an engineering control that can be used to avoid or minimize exposure to hazardous substances. When ventilation is insufficient or dust cannot be avoided, personal protective equipment such as N95 respirators can be used.[28][30]
Firefighting has long been associated with poor cardiovascular outcomes. In the United States, the most common cause of on-duty fatalities for firefighters is sudden cardiac death. In addition to personal factors that may predispose an individual to coronary artery disease or other cardiovascular diseases, occupational exposures can significantly increase a firefighter's risk. Historically, the fire service blamed poor physical condition as the primary cause of cardiovascular-related deaths, but over the last 20 years research has indicated that toxic gases put fire service personnel at significantly higher risk of cardiovascular conditions and death. Examples include carbon monoxide, present in nearly all fire environments, and hydrogen cyanide, formed during the combustion of paper, cotton, plastics, and other substances containing carbon and nitrogen. These combustion by-products interfere with the transport of oxygen in the body, and the resulting hypoxia can lead to heart injury. In addition, chronic exposure to particulate matter in smoke is associated with atherosclerosis. Noise exposure may contribute to hypertension and possibly ischemic heart disease. Other factors associated with firefighting, such as stress, heat stress, and heavy physical exertion, also increase the risk of cardiovascular events.[31]
During fire suppression activities a firefighter can reach peak or near-peak heart rates, which can act as a trigger for a cardiac event. For example, tachycardia can cause plaque buildup to break loose and lodge in a small artery of the heart, causing a myocardial infarction, also known as a heart attack. This, along with unhealthy habits and lack of exercise, can be very hazardous to firefighter health.[32]
A 2015 retrospective longitudinal study showed that firefighters are at higher risk for certain types of cancer. Firefighters had mesothelioma, which is caused by asbestos exposure, at twice the rate of the non-firefighting working population. Younger firefighters (under age 65) also developed bladder cancer and prostate cancer at higher rates than the general population. The risk of bladder cancer may be present in female firefighters, but research is inconclusive as of 2014.[33][34] Preliminary research from 2015 on a large cohort of US firefighters showed a direct relationship between the number of hours spent fighting fires and lung cancer and leukemia mortality in firefighters. This link is a topic of continuing research in the medical community, as is cancer mortality in general among firefighters.[35]
Firefighters are exposed to a variety of carcinogens at fires, including both carcinogenic chemicals and radiation (alpha radiation, beta radiation, and gamma radiation).[36]
As with other emergency workers, firefighters may witness traumatic scenes during their careers. They are thus more vulnerable than most people to certain mental health issues such as post-traumatic stress disorder[37][38] and suicidal thoughts and behaviors.[39][40] Among women in the US, the occupations with the highest suicide rates are police and firefighters, at 14.1 per 100,000, according to the National Center for Injury Prevention and Control, CDC.[41] Chronic stress over time contributes to symptoms such as anxiousness, irritability, nervousness, and memory and concentration problems, which can lead to anxiety and depression; mental stress can have long-lasting effects on the brain.[42] A 2014 report from the National Fallen Firefighters Foundation found that a fire department is three times more likely to experience a suicide in a given year than a line-of-duty death.[43] The mental stress of the job can also lead to substance and alcohol abuse as ways of coping.[44] This stress has many causes, including both what firefighters see on duty and what they miss by being on duty. Firefighters' schedules vary by district: some stations work 48 hours on and 48 hours off, while others allow 24 hours on and 72 hours off.[45] Missing a child's first steps or a ballet recital can weigh heavily on first responders, as can working opposite shifts from a spouse or being away from family.
Another long-term risk of firefighting is exposure to high levels of sound, which can cause noise-induced hearing loss (NIHL) and tinnitus.[46][47] NIHL affects sound frequencies between 3,000 and 6,000 hertz first, then, with more frequent exposure, spreads to more frequencies.[47] Many consonants become difficult to hear or inaudible with NIHL because of the higher frequencies affected, which results in poorer communication.[47] NIHL is caused by exposure to sound levels at or above 85 dBA according to NIOSH and at or above 90 dBA according to OSHA.[47] dBA denotes A-weighted decibels, used for measuring occupational sound exposure because the A-weighting attempts to mimic the sensitivity of the human ear to different frequencies of sound.[47] OSHA uses a 5-dBA exchange rate, meaning that for every 5-dBA increase in sound above 90 dBA, the acceptable exposure time before a risk of permanent hearing loss occurs is halved (starting from 8 hours of acceptable exposure at 90 dBA).[47][48] NIOSH uses a 3-dBA exchange rate, starting from 8 hours of acceptable exposure at 85 dBA.[47][49]
The exposure time required to potentially cause damage depends on the sound level.[49] The most common causes of excessive sound exposure are sirens, transportation to and from fires, fire alarms, and work tools.[46] Traveling in an emergency vehicle has been shown to expose a person to between 103 and 114 dBA of sound. Exposure at this level is acceptable for between 17 and 78 minutes according to OSHA[48] and between 35 seconds and 7.5 minutes according to NIOSH[49] over a 24-hour day before permanent hearing loss can occur, assuming no other high-level sound exposure in that 24-hour period.[49] Sirens often output about 120 dBA, at which only 7.5 minutes of exposure according to OSHA,[48] or 9 seconds according to NIOSH,[49] in a 24-hour period can cause permanent hearing loss. In addition to high sound levels, another risk factor for hearing disorders is co-exposure to chemicals that are ototoxic.[50]
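The OSHA and NIOSH limits quoted above both follow a simple halving rule: the permissible time is 8 hours at the criterion level, halved for every exchange-rate step in dBA above it. The following Python sketch (the function name `allowable_hours` is illustrative, not part of either regulation) reproduces the figures in this section:

```python
def allowable_hours(level_dba, criterion=90.0, exchange_rate=5.0):
    """Permissible daily exposure (hours) before risk of permanent
    hearing loss: 8 h at the criterion level, halved for every
    `exchange_rate` dBA above it."""
    return 8.0 / 2 ** ((level_dba - criterion) / exchange_rate)

# OSHA (criterion 90 dBA, 5-dBA exchange rate)
osha_siren = allowable_hours(120) * 60             # 7.5 minutes at 120 dBA
# NIOSH (criterion 85 dBA, 3-dBA exchange rate)
niosh_siren = allowable_hours(120, 85, 3) * 3600   # about 9 seconds at 120 dBA
niosh_vehicle = allowable_hours(103, 85, 3) * 60   # 7.5 minutes at 103 dBA
```

These values match the siren and emergency-vehicle figures quoted above; NIOSH's limits shrink much faster at high levels because of its 3-dBA exchange rate.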
The average day of work for a firefighter can often be under the sound exposure limit for both OSHA and NIOSH.[47] While the average day of sound exposure as a firefighter is often under the limit, firefighters can be exposed to impulse noise, which has a very low acceptable time exposure before permanent hearing damage can occur due to the high intensity and short duration.[46]
There are also high rates of hearing loss, often NIHL, in firefighters, which increase with age and years worked as a firefighter.[46][51] Hearing loss prevention programs have been implemented in multiple stations and have been shown to lower the rate of firefighters with NIHL.[47] Other attempts have been made to lower firefighters' sound exposure, such as enclosing the cabs of fire trucks to reduce siren exposure while driving.[47] The NFPA (National Fire Protection Association) is responsible for occupational health programs and standards for firefighters, which specify the hearing sensitivity required to work as a firefighter and also mandate baseline (initial) and annual hearing tests (based on OSHA hearing conservation regulations).[46] While NIHL is a risk of working as a firefighter, it is also a safety concern on the job, since communication with coworkers and victims is essential for safety.[46] Hearing protection devices (HPDs) have been used by firefighters in the United States.[47] Earmuffs are the most commonly used HPD, as they are the easiest to put on correctly and quickly.[47] Multiple fire departments have used HPDs with built-in communication devices, allowing firefighters to speak with each other at safe but audible sound levels while lowering the hazardous sound levels around them.[47]
In a country with a comprehensive fire service, fire departments must be able to send firefighters to emergencies at any hour of day or night, to arrive on the scene within minutes. In urban areas, this means that full-time paid firefighters usually have shift work, with some providing cover each night. On the other hand, it may not be practical to employ full-time firefighters in villages and isolated small towns, where their services may not be required for days at a time. For this reason, many fire departments have firefighters who spend long periods on call to respond to infrequent emergencies; they may have regular jobs outside of firefighting. Whether they are paid or not varies by country. In the United States and Germany, volunteer fire departments provide most of the cover in rural areas. In the United Kingdom and Ireland, by contrast, actual volunteers are rare. Instead, "retained firefighters" are paid for responding to incidents, along with a small salary for spending long periods of time on call.
A key difference between countries' fire services is the balance between full-time and volunteer (or on-call) firefighters. In the United States and United Kingdom, large metropolitan fire departments are almost entirely made up of full-time firefighters. On the other hand, in Germany and Austria,[52] volunteers play a substantial role even in the largest fire departments, including Berlin's, which serves a population of 3.6 million. Regardless of how this balance works, a common feature is that smaller urban areas have a mix of full-time and volunteer/on-call firefighters. This is known in the United States as a combination fire department. In Chile and Peru, all firefighters are volunteers.[53]
Another point of variation is how the fire services are organized. Some countries like Israel and New Zealand have a single national fire service. Others like Australia, the United Kingdom and France organize fire services based on regions or sub-national states. In the United States, Germany and Canada, fire departments are run at a municipal level.
Atypically, Singapore and many parts of Switzerland have fire service conscription.[54][55] In Germany, conscription can also be used if a village does not have a functioning fire service. Other unusual arrangements are seen in France, where two of the country's fire services (the Paris Fire Brigade and the Marseille Naval Fire Battalion) are part of the armed forces, and Denmark, where most fire services are run by private companies.[56]
Another way in which a firefighter's work varies around the world is the nature of firefighting equipment and tactics. For example, American fire departments make heavier use of aerial appliances and are often split between engine and ladder companies. In Europe, where the size and usefulness of aerial appliances are often limited by narrow streets, aerial appliances are used only for rescues, and firefighters can rotate between working on an engine and an aerial appliance.[57][56] A final point of variation is how involved firefighters are in emergency medical services.
The expedient and accurate handling of fire alarms or calls is a significant factor in the successful outcome of any incident. Fire department communications play a critical role in that successful outcome. Fire department communications include the methods by which the public can notify the communications center of an emergency, the methods by which the center can notify the proper firefighting forces, and the methods by which information is exchanged at the scene. One such method is to use a megaphone to communicate.
A telecommunicator (often referred to as a 000 Operator)[citation needed] has a role different from, but just as important as, that of other emergency personnel. The telecommunicator must process calls from unknown and unseen individuals, usually calling under stressful conditions, obtain complete, reliable information from the caller, and prioritize requests for assistance. It is the dispatcher's responsibility to bring order to chaos.
While some fire departments are large enough to utilize their own telecommunication dispatcher, most rural and small areas rely on a central dispatcher to provide handling of fire, rescue, and police services.
Firefighters are trained to use communications equipment to receive alarms, give and receive commands, request assistance, and report on conditions. Since firefighters from different agencies routinely provide mutual aid to each other, and routinely operate at incidents where other emergency services are present, it is essential to have structures in place to establish a unified chain of command, and share information between agencies. The U.S. Federal Emergency Management Agency (FEMA) has established a National Incident Management System.[58] One component of this system is the Incident Command System.
All radio communication in the United States is under authorization from the Federal Communications Commission (FCC); as such, fire departments that operate radio equipment must have radio licenses from the FCC.
Ten codes were popular in the early days of radio equipment because of poor transmission and reception. Advances in modern radio technology have reduced the need for ten-codes and many departments have converted to simple English (clear text).
Many firefighters are sworn members with command structures similar to the military and police. They do not have general police powers (some firefighters in the United States have limited police powers, like fire police departments, while certain fire marshals have full police powers, i.e. the ability to make warrantless arrests, and authority to carry a firearm on and off-duty), but have specific powers of enforcement and control in fire and emergency situations.
The basic unit of an American fire department is a "company", a group of firefighters who typically work on the same engine. A "crew" or "platoon" is a subdivision of a company who work on the same shift. Commonwealth fire services are more likely to be organized around a "watch", who work the same shift on multiple engines.[59]
New South Wales Rural Fire Service
New rank structure of 2015.
Ranks amongst Canadian firefighters vary across the country, and formal rank structures appear mostly in larger departments:
Toronto
Montreal
Vancouver
Ranks are divided between company officers and fire department officers, each of which can be subdivided into active (field) officers and administrative officers. The active officers are the captain and three or four lieutenants; these active officers are distinguished by red lines on their helmets.
Most fire brigades in Commonwealth countries (except Canada) have a more "civilianised" nomenclature, structured in a traditional manner. For example, the common structure in United Kingdom brigades is:
French civilian fire services, which historically derive from French army sapper units, use French Army ranks. The highest rank in many departments is full colonel. The NCO rank of major is now used only in the Paris Fire Brigade and the Marseille Naval Fire Battalion; it has been abolished in the other fire departments since 2013.
In Germany, every federal state has its own civil protection laws, so rank systems differ between states. Additionally, in the volunteer fire departments there is a difference between a rank and an official position, a distinction founded on the military traditions of the fire departments. Every firefighter can hold a high rank without holding an official position: a firefighter can be promoted on the basis of years of service, training skills and qualifications. Official positions, by contrast, are partly elected and partly assigned by capability. As a result, ordinary older firefighters may hold higher ranks than their leaders, but these ranks confer no command authority (they are brevet ranks).
Completed vocational training in a technical occupation suitable for the fire service. Basic firefighter training.
Bachelor of Engineering and two years departmental training.
Master of Engineering and two years of departmental training.
Firefighters in Indonesia form part of the civil service of local governments and wear variant forms of uniforms worn by civil servants and employees.
The Vigili del Fuoco (the word "Vigili" comes from the Latin word "Vigiles", meaning "one who is part of certain guards") have the official name of Corpo nazionale dei vigili del fuoco (CNVVF, National Firefighters Corps).
The CNVVF is the Italian institutional agency for fire and rescue service. It is part of the Ministry of Interior's Department of Firefighters, Public Rescue and Public Protection. The CNVVF's task is to provide safety for people, animals and property, and to check the compliance of buildings and industries with fire safety rules. The Ministry of the Interior, through the CNVVF, adopts fire safety rules via ministerial decrees or other lower-rank documents. The CNVVF also ensures public rescue in emergencies involving chemical, bacteriological and radiological weapons and materials. Since 2012 the Corps has used its own rank titles (dating from 2007) with matching military-styled insignia in honor of its origins.
In 2016 the CNVVF took on forest firefighting activities together with the regional forest agencies, following the suppression of the National Forest Guards, which were merged into the Carabinieri (their firefighting personnel were integrated into the CNVVF).
In Iran, every city has its own fire department, but ranks are the same in the whole country, and are as follows:
In Ireland, the traditional brigade rank structure is still adopted. Below is the common structure for most brigades, Cork and Dublin Fire Brigade have additional ranks:
Japanese fire departments' rank insignia are placed on a small badge pinned above the right pocket, with rank indicated by stripes and hexagram stars. The design of the insignia came from older Japanese-style military insignia. Officers and team leaders may wear an armband on the fire jacket to show their status as command leaders. Rank may also be shown by the color of the fire jacket worn by command staff. White and gray are reserved for EMS personnel, and orange for rescuers.
Grand-Ducal Fire and Rescue Corps of Luxembourg.
Aspirant Brigadier
Brigadier
Corporal
Chief Corporal
Chief Corporal 1st Class
Sergeant
Chief Sergeant
Sergeant major
Aspirant Adjutant
Adjutant
Chief Adjutant
Adjutant major
Aspirant Lieutenant
Lieutenant
Lieutenant 1st Class
Captain
Major
Lieutenant Colonel
Colonel
Director General
In New Zealand, rank is shown on epaulettes on firefighters' station uniform, and through colors and stripes on firefighters' helmets. As the nation has only a single fire department, the New Zealand Fire Service, ranks are consistent throughout the country.
In the Russian Federation, the decals are applied symmetrically on both sides of the helmet (front and rear). The location of the decals on the special clothing and SCBA is established for each fire department of the same type within the territorial entity. The following ranks are used by State Fire Service civilian personnel, while military personnel use ranks similar to those of the Police of Russia, due to their pre-2001 history as the fire service of the Ministry of Internal Affairs of the Russian Federation before all firefighting services were transferred to the Ministry of Emergency Situations.
Tunisian firefighters' ranks are the same as those of the army, police and national guard.
In the United States, helmet colors often denote a fire fighter's rank or position. In general, white helmets denote chief officers, while red helmets may denote company officers, but the specific meaning of a helmet's color or style varies from region to region and department to department. The rank of an officer in an American fire department is most commonly denoted by a number of speaking trumpets, a reference to a megaphone-like device used in the early days of the fire service, although typically called "bugle" in today's parlance. Ranks proceed from one (lieutenant) to five (fire chief) bugles. Traditional ranks in American fire departments that exist but may not always be utilized in all cities or towns include:
Chief/Commissioner
In many fire departments in the U.S., the captain is commonly the commander of a company and a lieutenant is the supervisor of the company's firefighters on shift. There is no state or federal rank structure for firefighters and each municipality or volunteer fire department creates and uses their own unique structure.
Still other American fire departments, such as the FDNY, use military rank insignia in addition to or instead of the traditional bugles. Additionally, officers on truck companies have been known to use rank insignia shaped like axes for lieutenants (1) and captains (2).
Turkish firefighters in MOPP 4 level protective gear during an exercise held at Incirlik Air Base, Turkey
Toronto firefighters prepare their equipment
A firefighter using a hydraulic cutter during a demonstration
British naval men in firefighting gear on HMS Illustrious (R06), Liverpool, 25 October 2009
A partial list of some equipment typically used by firefighters:
Although people have fought fires since there have been valuable things to burn, the first instance of organized professionals combating structural fires occurred in ancient Egypt. Likewise, firefighters of the Roman Republic existed solely as privately organized and funded groups that operated more like a business than a public service; however, during the Principate period, Augustus revolutionized firefighting by calling for the creation of a fire guard that was trained, paid, and equipped by the state, thereby commissioning the first truly public and professional firefighting service. Known as the Vigiles, they were organised into cohorts, serving as a night watch and city police force.
The earliest American fire departments were volunteer companies, including the volunteer fire company in New Amsterdam, now known as New York.[62] Fire companies were composed of citizens who volunteered their time to help protect the community. As time progressed and new towns were established throughout the region, there was a sharp increase in the number of volunteer departments.
In 1853, the first career fire department in the United States was established in Cincinnati, Ohio, followed four years later by the St. Louis Fire Department. Large cities began establishing paid, full-time staff in order to handle their greater call volumes.
City fire departments draw their funding directly from city taxes and share the same budget as other public works like the police department and trash services. The primary difference between municipality departments and city departments is the funding source. Municipal fire departments do not share their budget with any other service and are considered to be private entities within a jurisdiction. This means that they have their own taxes that feed into their budgeting needs. City fire departments report to the mayor, whereas municipal departments are accountable to elected board officials who help maintain and run the department along with the chief officer staff.[citation needed]
Funds for firefighting equipment may be raised by the firefighters themselves, especially in the case of volunteer organizations. Events such as pancake breakfasts and chili feeds are common in the United States. Social events used to raise money include dances, fairs, and car washes.
Media related to Firefighter at Wikimedia Commons
Fact Sheet for Firefighters and EMS providers regarding risks for exposure to COVID-19, Centers for Disease Control and Prevention.
en/4724.html.txt
ADDED
A firefighter (also known as a fireman or firewoman) is a rescuer extensively trained in firefighting, primarily to extinguish hazardous fires that threaten life, property, and the environment as well as to rescue people and animals from dangerous situations.
The complexity of modern, industrialized life has created an increase in the skills needed in firefighting technology. The fire service, also known in some countries as the fire brigade or fire department, is one of the three main emergency services. From urban areas to aboard ships, firefighters have become ubiquitous around the world.
The skills required for safe operations are regularly practiced during training evaluations throughout a firefighter's career. Initial firefighting skills are normally taught through local, regional or state-approved fire academies or training courses.[1] Depending on the requirements of a department, additional skills and certifications such as technical rescue and pre-hospital medicine may also be acquired at this time.
Firefighters work closely with other emergency response agencies such as the police and emergency medical service. A firefighter's role may overlap with both. Fire investigators or fire marshals investigate the cause of a fire. If the fire was caused by arson or negligence, their work will overlap with law enforcement. Firefighters also frequently provide some degree of emergency medical service, in addition to working with full-time paramedics.
The basic tasks of firefighters include: fire suppression, rescue, fire prevention, basic first aid, and investigations. Firefighting is further broken down into skills which include: size-up, extinguishing, ventilation, search and rescue, salvage, containment, mop up and overhaul.
A fire burns due to the presence of three elements: fuel, oxygen and heat. This is often referred to as the fire triangle. Sometimes it is known as the fire tetrahedron if a fourth element is added: a chemical chain reaction which can help sustain certain types of fire. The aim of firefighting is to deprive the fire of at least one of those elements. Most commonly this is done by dousing the fire with water, though some fires require other methods such as foam or dry agents. Firefighters are equipped with a wide variety of equipment for this purpose that include: ladder trucks, pumper trucks, tanker trucks, fire hose, and fire extinguishers.
While sometimes fires can be limited to small areas of a structure, wider collateral damage due to smoke, water and burning embers is common. Utility shutoff (such as gas and electricity) is typically an early priority for arriving fire crews. In addition, forcible entry may be required in order to gain access into the structure. Specific procedures and equipment are needed at a property where hazardous materials are being used or stored.
Structure fires may be attacked with either "interior" or "exterior" resources, or both. Interior crews, using the "two in, two out" rule, may extend fire hose lines inside the building, find the fire and cool it with water. Exterior crews may direct water into windows and other openings, or against any nearby fuels exposed to the initial fire. Hose streams directed into the interior through exterior wall apertures may conflict and jeopardize interior fire attack crews.
Buildings made of flammable materials such as wood differ from those made of fire-resistant materials such as concrete. Generally, a "fire-resistant" building is designed to limit fire to a small area or floor; other floors can be kept safe by preventing smoke inhalation and damage. All buildings suspected of being on fire must be evacuated, regardless of fire rating.
Some firefighting tactics may appear to be destructive, but often serve specific needs. For example, during ventilation, firefighters are forced either to open holes in the roof or floors of a structure (called vertical ventilation), or to open windows and walls (called horizontal ventilation), to remove smoke and heated gases from the interior of the structure. Such ventilation methods are also used to improve interior visibility to locate victims more quickly. Ventilation helps to preserve the life of trapped or unconscious individuals, as it releases the poisonous gases from inside the structure. Vertical ventilation is vital to firefighter safety in the event of a flashover or backdraft scenario. Releasing the flammable gases through the roof eliminates the possibility of a backdraft, and the removal of heat can reduce the possibility of a flashover. Flashovers, due to their intense heat (900–1200 °F, roughly 480–650 °C) and explosive behavior, are commonly fatal to firefighter personnel. Precautionary methods, such as smashing a window, reveal backdraft situations before the firefighter enters the structure and is met with the circumstance head-on. Firefighter safety is the number one priority.
Whenever possible during a structure fire, property is moved into the middle of a room and covered with a salvage cover, a heavy cloth-like tarp. Various steps such as retrieving and protecting valuables found during suppression or overhaul, evacuating water, and boarding windows and roofs can divert or prevent post-fire runoff.
Wildfires (known in Australia as bushfires) require a unique set of strategies and tactics. In many countries such as Australia and the United States, these duties are mostly carried out by local volunteer firefighters. Wildfires have some ecological role in allowing new plants to grow, therefore in some cases they will be left to burn.[2] Priorities in fighting wildfires include preventing the loss of life and property.
Firefighters rescue people (and animals) from dangerous situations such as crashed vehicles, structural collapses, trench collapses, cave and tunnel emergencies, water and ice emergencies, elevator emergencies, energized electrical line emergencies, and industrial accidents.[3] In less common circumstances, firefighters rescue victims from hazardous materials emergencies as well as from steep cliffs, embankments and high rises; the latter is referred to as high-angle rescue, or rope rescue. Many fire departments, including most in the United Kingdom, refer to themselves as a fire and rescue service for this reason. Large fire departments, such as the New York City Fire Department and London Fire Brigade, have specialist teams for advanced technical rescue. As building fires have been in decline for many years in developed countries such as the United States, rescues other than fires make up an increasing proportion of firefighters' work.[4]
Firefighters frequently provide some degree of emergency medical care. In some jurisdictions first aid is the only medical training that firefighters have, and medical-only calls are the sole responsibility of a separate emergency medical services (EMS) agency. Elsewhere, it is common for firefighters to respond to medical-only calls. The impetus for this is the growing demand for emergency medicine and the decline of fires and traditional firefighting call-outs[4]—though fire departments still have to be able to respond to them—and their existing ability to respond rapidly to emergencies. A rapid response is particularly necessary for cardiac arrests, as these will lead to death if not treated within minutes.[5]
The dispatch of firefighters to medical emergencies is particularly common in fire departments that run the EMS, including most large cities of the United States. In those departments, firefighters are often jointly trained as emergency medical technicians in order to deliver basic life support, and more rarely as paramedics to deliver advanced life support. In the United Kingdom, where fire services and EMS are run separately, fire service co-responding has been introduced more recently.[6] Another point of variation is whether the firefighters respond in a fire engine or a response car.[7] Either way, separate employees to crew ambulances are still needed, unless the firefighters can work shifts on the ambulances.
Airports employ specialist firefighters to deal with potential ground emergencies. Due to the mass casualty potential of an aviation emergency, the speed with which emergency response equipment and personnel arrive at the scene of the emergency is of paramount importance. When dealing with an emergency, the airport firefighters are tasked with rapidly securing the aircraft, its crew and its passengers from all hazards, particularly fire. Airport firefighters have advanced training in the application of firefighting foams, dry chemical and clean agents used to extinguish burning aviation fuel.
Fire departments are usually the primary agency that responds to an emergency involving hazardous materials. Specialized firefighters, known as hazardous materials technicians, have training and certification in chemical identification, leak control, decontamination, and clean-up procedures.
Fire departments frequently provide advice to the public on how to prevent fires in the home and work-place environments. Fire inspectors or fire marshals will directly inspect businesses to ensure they are up to the current building fire codes,[8][9] which are enforced so that a building can sufficiently resist fire spread, potential hazards are located, and to ensure that occupants can be safely evacuated, commensurate with the risks involved.
Fire suppression systems have a proven record for controlling and extinguishing unwanted fires. Many fire officials recommend that every building, including residences, have fire sprinkler systems.[10] Correctly working sprinklers in a residence greatly reduce the risk of death from a fire.[11] With the small rooms typical of a residence, one or two sprinklers can cover most rooms. In the United States, housing industry trade groups have lobbied at the state level to prevent the requirement for fire sprinklers in one- and two-bedroom homes.[12][13]
Other methods of fire prevention are by directing efforts to reduce known hazardous conditions or by preventing dangerous acts before tragedy strikes. This is normally accomplished in many innovative ways such as conducting presentations, distributing safety brochures, providing news articles, writing public safety announcements (PSA) or establishing meaningful displays in well-visited areas. Ensuring that each household has working smoke alarms, is educated in the proper techniques of fire safety, has an evacuation route and rendezvous point is of top priority in public education for most fire prevention teams in almost all fire department localities.
Fire investigators, who are experienced firefighters trained in fire cause determination, are dispatched to fire scenes in order to investigate whether the fire was the result of an accident or was intentional. Some fire investigators have full law enforcement powers to investigate and arrest suspected arsonists.
To allow protection from the inherent risks of fighting fires, firefighters wear and carry protective and self-rescue equipment at all times. A self-contained breathing apparatus (SCBA) delivers air to the firefighter through a full face mask and is worn to protect against smoke inhalation, toxic fumes, and superheated gases. A special device called a Personal Alert Safety System (PASS) is commonly worn independently or as a part of the SCBA to alert others when a firefighter stops moving for a specified period of time or manually operates the device. The PASS device sounds an alarm that can assist a firefighter assist and search team (FAST), or rapid intervention team (RIT), in locating the firefighter in distress.
Firefighters often carry personal self-rescue ropes. The ropes are generally 30 feet long and can provide a firefighter (that has enough time to deploy the rope) a partially controlled exit out of an elevated window. Lack of a personal rescue rope is cited in the deaths of two New York City Firefighters, Lt. John Bellew and Lt. Curtis Meyran, who died after they jumped from the fourth floor of a burning apartment building in the Bronx. Of the four firefighters who jumped and survived, only one of them had a self-rescue rope. Since the incident, the Fire Department of New York City has issued self-rescue ropes to their firefighters.[14]
Heat injury is a major issue for firefighters, as they wear insulated clothing and cannot shed the heat generated by physical exertion. Early detection of heat issues is critical to stop dehydration and heat stress from becoming fatal. Early onset of heat stress affects cognitive function, which, combined with operating in a dangerous environment, makes heat stress and dehydration critical issues to monitor. Firefighter physiological status monitoring is showing promise in alerting EMS and commanders to the status of their people on the fire ground. Devices such as the PASS device alert 10–20 seconds after a firefighter has stopped moving in a structure. Physiological status monitors measure a firefighter's vital signs, fatigue and exertion levels and transmit this information over their voice radio. This technology allows a degree of early warning of physiological stress. These devices[15] are similar to technology developed for Future Force Warrior and give a measure of exertion and fatigue. They also tell the people outside a building when firefighters have stopped moving or fallen. This allows a supervisor to call in additional engines before the crew becomes exhausted, and also gives an early warning to firefighters before they run out of air, as they may not be able to make voice calls over their radio. Current OSHA tables exist for heat injury and the allowable amount of work in a given environment based on temperature, humidity and solar loading.[16]
Firefighters are also at risk for developing rhabdomyolysis. Rhabdomyolysis is the breakdown of muscle tissue and has many causes including heat exposure, high core body temperature, and prolonged, intense exertion. Routine firefighter tasks, such as carrying extra weight of equipment and working in hot environments, can increase firefighters’ risk for rhabdomyolysis.[17][18]
Another leading cause of death during firefighting is the structural collapse of a burning building (e.g. a wall, floor, ceiling, roof, or truss system). Structural collapse, which often occurs without warning, may crush or trap firefighters inside the structure. To avoid loss of life, all on-duty firefighters should maintain two-way radio communication with the incident commander and be equipped with a personal alert safety system (PASS) device on all fire scenes.[19][20] Francis Brannigan was the founder of and greatest contributor to this element of firefighter safety.
In the United States, 25% of firefighter fatalities are caused by traffic collisions while responding to or returning from an incident. Other firefighters have been injured or killed by vehicles at the scene of a fire or emergency (Paulison 2005). A common preventive measure taken by fire departments is to require firefighters to wear a bright yellow reflective vest over their turnout coats when working on a public road, to make them more visible to passing drivers.[21] In addition to the direct dangers of firefighting, cardiovascular disease accounts for approximately 45% of on-duty firefighter deaths.[22]
Firefighters have sometimes been assaulted by members of the public while responding to calls. Such attacks can cause firefighters to fear for their safety and may distract them from the situation at hand, which could result in injury to themselves or the patient.[citation needed]
Once extinguished, fire debris cleanup poses several safety and health risks for workers.[23][24]
Many hazardous substances are commonly found in fire debris. Silica can be found in concrete, roofing tiles, or it may be a naturally occurring element. Occupational exposure to silica dust can cause silicosis, lung cancer, pulmonary tuberculosis, airway diseases, and some additional non-respiratory diseases.[25] Inhalation of asbestos can result in various diseases including asbestosis, lung cancer, and mesothelioma.[26] Sources of metals exposure include burnt or melted electronics, cars, refrigerators, stoves, etc. Fire debris cleanup workers may be exposed to these metals or their combustion products in the air or on their skin. These metals may include beryllium, cadmium, chromium, cobalt, lead, manganese, nickel, and many more.[23] Polycyclic aromatic hydrocarbons (PAHs), some of which are carcinogenic, come from the incomplete combustion of organic materials and are often found as a result of structural and wildland fires.[27]
Safety hazards of fire cleanup include the risk of reignition of smoldering debris and of electrocution from downed or exposed electrical lines, or where water has come into contact with electrical equipment. Structures that have been burned may be unstable and at risk of sudden collapse.[24][28]
Standard personal protective equipment for fire cleanup includes hard hats, goggles or safety glasses, heavy work gloves, earplugs or other hearing protection, steel-toe boots, and fall protection devices.[28][29] Hazard controls for electrical injury include assuming all power lines are energized until they are confirmed de-energized, grounding power lines to guard against electrical feedback, and using appropriate personal protective equipment.[28] Proper respiratory protection can protect against hazardous substances. Proper ventilation of an area is an engineering control that can be used to avoid or minimize exposure to hazardous substances. When ventilation is insufficient or dust cannot be avoided, personal protective equipment such as N95 respirators can be used.[28][30]
Firefighting has long been associated with poor cardiovascular outcomes. In the United States, the most common cause of on-duty fatalities for firefighters is sudden cardiac death. In addition to personal factors that may predispose an individual to coronary artery disease or other cardiovascular diseases, occupational exposures can significantly increase a firefighter's risk. Historically, the fire service blamed poor physical condition as the primary cause of cardiovascular-related deaths. However, over the last 20 years, research has indicated that toxic gases put fire service personnel at significantly higher risk of cardiovascular conditions and death. Examples include carbon monoxide, present in nearly all fire environments, and hydrogen cyanide, formed during the combustion of paper, cotton, plastics, and other substances containing carbon and nitrogen. These combustion by-products interfere with the transport of oxygen in the body, and the resulting hypoxia can lead to heart injury. In addition, chronic exposure to particulate matter in smoke is associated with atherosclerosis. Noise exposure may contribute to hypertension and possibly ischemic heart disease. Other factors associated with firefighting, such as stress, heat stress, and heavy physical exertion, also increase the risk of cardiovascular events.[31]
During fire suppression activities a firefighter can reach peak or near-peak heart rates, which can act as a trigger for a cardiac event. For example, tachycardia can cause plaque buildup to break loose and lodge in a small artery of the heart, causing myocardial infarction, also known as a heart attack. This, along with unhealthy habits and lack of exercise, can be very hazardous to firefighter health.[32]
A 2015 retrospective longitudinal study showed that firefighters are at higher risk for certain types of cancer. Firefighters had mesothelioma, which is caused by asbestos exposure, at twice the rate of the non-firefighting working population. Younger firefighters (under age 65) also developed bladder cancer and prostate cancer at higher rates than the general population. The risk of bladder cancer may be present in female firefighters, but research is inconclusive as of 2014.[33][34] Preliminary research from 2015 on a large cohort of US firefighters showed a direct relationship between the number of hours spent fighting fires and lung cancer and leukemia mortality in firefighters. This link is a topic of continuing research in the medical community, as is cancer mortality in general among firefighters.[35]
Firefighters are exposed to a variety of carcinogens at fires, including both carcinogenic chemicals and radiation (alpha radiation, beta radiation, and gamma radiation).[36]
As with other emergency workers, firefighters may witness traumatic scenes during their careers. They are thus more vulnerable than most people to certain mental health issues such as post-traumatic stress disorder[37][38] and suicidal thoughts and behaviors.[39][40] Among women in the US, the occupations with the highest suicide rates are police and firefighters, with a rate of 14.1 per 100,000, according to the National Center for Injury Prevention and Control, CDC.[41] Chronic stress over time contributes to symptoms that affect first responders, such as anxiousness, irritability, nervousness, and memory and concentration problems, which can lead to anxiety and depression; mental stress can have long-lasting effects on the brain.[42] A 2014 report from the National Fallen Firefighters Foundation found that a fire department is three times more likely to experience a suicide in a given year than a line-of-duty death.[43] Mental stress of the job can lead to substance abuse and alcohol abuse as ways of coping.[44] The mental stress of firefighting has many different causes: what firefighters see on duty, and also what they miss by being on duty. Firefighters' schedules fluctuate by district; in some stations firefighters work 48 hours on and 48 hours off, while others allow 24 hours on and 72 hours off.[45] Missing a child's first steps or a ballet recital can take a heavy toll on first responders, as can working opposite shifts from a spouse or being away from family.
Another long-term risk factor from firefighting is exposure to high levels of sound, which can cause noise-induced hearing loss (NIHL) and tinnitus.[46][47] NIHL affects sound frequencies between 3,000 and 6,000 Hz first, then, with more frequent exposure, spreads to more frequencies.[47] Many consonants become difficult to hear or inaudible with NIHL because of the higher frequencies affected, which results in poorer communication.[47] NIHL is caused by exposure to sound levels at or above 85 dBA according to NIOSH and at or above 90 dBA according to OSHA.[47] dBA denotes A-weighted decibels, which are used for measuring occupational sound exposure because the A-weighting attempts to mimic the sensitivity of the human ear to different frequencies of sound.[47] OSHA uses a 5-dBA exchange rate, which means that for every 5 dBA increase in sound level above 90 dBA, the acceptable exposure time before a risk of permanent hearing loss occurs decreases by half (starting with 8 hours of acceptable exposure time at 90 dBA).[47][48] NIOSH uses a 3-dBA exchange rate, starting at 8 hours of acceptable exposure time at 85 dBA.[47][49]
The exposure time required to potentially cause damage depends on the sound level.[49] The most common causes of excessive sound exposure are sirens, transportation to and from fires, fire alarms, and work tools.[46] Traveling in an emergency vehicle has been shown to expose a person to between 103 and 114 dBA of sound. Over a 24-hour day, exposure at this level is acceptable for between 17 and 78 minutes according to OSHA,[48] and between 35 seconds and 7.5 minutes according to NIOSH,[49] before permanent hearing loss can occur. These time periods assume that no other high-level sound exposure occurs in the same 24-hour time frame.[49] Sirens often output about 120 dBA, at which permanent hearing loss can occur after 7.5 minutes of exposure according to OSHA[48] or 9 seconds according to NIOSH[49] in a 24-hour period. In addition to high sound levels, another risk factor for hearing disorders is co-exposure to chemicals that are ototoxic.[50]
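The OSHA and NIOSH figures above all follow the same dose formula: the allowable exposure time halves for every exchange-rate step above the criterion level. A minimal sketch of that calculation (the function name is illustrative, not from any standard library):

```python
def permissible_minutes(level_dba, criterion_dba, exchange_rate_db):
    """Allowable daily exposure in minutes: T = 8 / 2**((L - Lc) / Q) hours,
    where Lc is the criterion level and Q is the exchange rate in dB."""
    hours = 8 / 2 ** ((level_dba - criterion_dba) / exchange_rate_db)
    return hours * 60

# A siren at ~120 dBA:
print(round(permissible_minutes(120, 90, 5), 1))    # OSHA (90 dBA, 5-dB rate): 7.5 minutes
print(round(permissible_minutes(120, 85, 3) * 60))  # NIOSH (85 dBA, 3-dB rate): ~9 seconds
```

Plugging in the 103–114 dBA emergency-vehicle range reproduces the 17–78 minute (OSHA) and 35 second–7.5 minute (NIOSH) figures cited above.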
The average workday for a firefighter is often under the sound exposure limits of both OSHA and NIOSH.[47] However, firefighters can also be exposed to impulse noise, which, due to its high intensity and short duration, has a very low acceptable exposure time before permanent hearing damage can occur.[46]
There are also high rates of hearing loss, often NIHL, in firefighters, which increase with age and years of service.[46][51] Hearing loss prevention programs have been implemented in multiple stations and have been shown to help lower the rate of firefighters with NIHL.[47] Other attempts have been made to lower firefighters' sound exposure, such as enclosing the cabs of fire trucks to reduce siren exposure while driving.[47] The NFPA (National Fire Protection Association) maintains occupational health programs and standards for firefighters, which specify what hearing sensitivity is required to work as a firefighter and also mandate baseline (initial) and annual hearing tests (based on OSHA hearing conservation regulations).[46] Beyond being a risk of the job, NIHL is also a safety concern while doing the job, as communicating with coworkers and victims is essential.[46] Hearing protection devices (HPDs) have been used by firefighters in the United States.[47] Earmuffs are the most commonly used HPD, as they are the easiest to put on correctly and quickly.[47] Multiple fire departments have used HPDs with built-in communication devices, allowing firefighters to speak with each other at safe but audible sound levels while lowering the hazardous sound levels around them.[47]
In a country with a comprehensive fire service, fire departments must be able to send firefighters to emergencies at any hour of day or night, to arrive on the scene within minutes. In urban areas, this means that full-time paid firefighters usually have shift work, with some providing cover each night. On the other hand, it may not be practical to employ full-time firefighters in villages and isolated small towns, where their services may not be required for days at a time. For this reason, many fire departments have firefighters who spend long periods on call to respond to infrequent emergencies; they may have regular jobs outside of firefighting. Whether they are paid or not varies by country. In the United States and Germany, volunteer fire departments provide most of the cover in rural areas. In the United Kingdom and Ireland, by contrast, actual volunteers are rare. Instead, "retained firefighters" are paid for responding to incidents, along with a small salary for spending long periods of time on call.
A key difference between many countries' fire services is the balance between full-time and volunteer (or on-call) firefighters. In the United States and United Kingdom, large metropolitan fire departments are almost entirely made up of full-time firefighters. On the other hand, in Germany and Austria,[52] volunteers play a substantial role even in the largest fire departments, including Berlin's, which serves a population of 3.6 million. Regardless of how this balance works, a common feature is that smaller urban areas have a mix of full-time and volunteer/on-call firefighters. This is known in the United States as a combination fire department. In Chile and Peru, all firefighters are volunteers.[53]
Another point of variation is how the fire services are organized. Some countries like Israel and New Zealand have a single national fire service. Others like Australia, the United Kingdom and France organize fire services based on regions or sub-national states. In the United States, Germany and Canada, fire departments are run at a municipal level.
Atypically, Singapore and many parts of Switzerland have fire service conscription.[54][55] In Germany, conscription can also be used if a village does not have a functioning fire service. Other unusual arrangements are seen in France, where two of the country's fire services (the Paris Fire Brigade and the Marseille Naval Fire Battalion) are part of the armed forces, and Denmark, where most fire services are run by private companies.[56]
Another way in which a firefighter's work varies around the world is the nature of firefighting equipment and tactics. For example, American fire departments make heavier use of aerial appliances, and are often split between engine and ladder companies. In Europe, where the size and usefulness of aerial appliances are often limited by narrow streets, they are only used for rescues, and firefighters can rotate between working on an engine and an aerial appliance.[57][56]

A final point of variation is how involved firefighters are in emergency medical services.
The expedient and accurate handling of fire alarms or calls is a significant factor in the successful outcome of any incident. Fire department communications play a critical role in that successful outcome. Fire department communications include the methods by which the public can notify the communications center of an emergency, the methods by which the center can notify the proper firefighting forces, and the methods by which information is exchanged at the scene, such as using a megaphone to communicate.
A telecommunicator (often referred to as a 000 Operator)[citation needed] has a role different from but just as important as other emergency personnel. The telecommunicator must process calls from unknown and unseen individuals, usually calling under stressful conditions. He/she must be able to obtain complete, reliable information from the caller and prioritize requests for assistance. It is the dispatcher's responsibility to bring order to chaos.
While some fire departments are large enough to utilize their own telecommunication dispatcher, most rural and small areas rely on a central dispatcher to provide handling of fire, rescue, and police services.
Firefighters are trained to use communications equipment to receive alarms, give and receive commands, request assistance, and report on conditions. Since firefighters from different agencies routinely provide mutual aid to each other, and routinely operate at incidents where other emergency services are present, it is essential to have structures in place to establish a unified chain of command, and share information between agencies. The U.S. Federal Emergency Management Agency (FEMA) has established a National Incident Management System.[58] One component of this system is the Incident Command System.
All radio communication in the United States is under authorization from the Federal Communications Commission (FCC); as such, fire departments that operate radio equipment must have radio licenses from the FCC.
Ten codes were popular in the early days of radio equipment because of poor transmission and reception. Advances in modern radio technology have reduced the need for ten-codes and many departments have converted to simple English (clear text).
Many firefighters are sworn members with command structures similar to the military and police. They do not have general police powers (some firefighters in the United States have limited police powers, like fire police departments, while certain fire marshals have full police powers, i.e. the ability to make warrantless arrests, and authority to carry a firearm on and off-duty), but have specific powers of enforcement and control in fire and emergency situations.
The basic unit of an American fire department is a "company", a group of firefighters who typically work on the same engine. A "crew" or "platoon" is a subdivision of a company who work on the same shift. Commonwealth fire services are more likely to be organized around a "watch", who work the same shift on multiple engines.[59]
New South Wales Rural Fire Service
New rank structure of 2015.
Ranks amongst Canadian firefighters vary across the country and ranking appears mostly with larger departments:
Toronto
Montreal
Vancouver
Ranks are divided between company officers and fire department officers, each of which can be subdivided into active (field) officers and administrative officers. The active officers are the captain and three or four lieutenants; these four active officers are distinguished by red lines on their helmets.
Most fire brigades in Commonwealth countries (except Canada) have a more "civilianised" nomenclature, structured in a traditional manner. For example, the common structure in United Kingdom brigades is:
French civilian fire services, which historically are derived from French army sapper units, use French Army ranks. The highest rank in many departments is full colonel. Only the NCO rank of major is used in both the Paris Fire Brigade and the Marseille Naval Fire Battalion; since 2013 it has been abolished in the other fire departments.
In Germany every federal state has its own civil protection laws, so the states have different rank systems. Additionally, in the volunteer fire departments there is a difference between a rank and an official position, a distinction founded on the military traditions of the fire departments. Any firefighter can hold a high rank without holding an official position: firefighters can be promoted through years of service, training skills and qualifications, while official positions are partly elected or assigned by capability. As a result, older ordinary firefighters may hold higher ranks than their leaders, but these ranks by themselves confer no command authority (brevet).
Completed vocational training in a technical occupation suitable for the fire service. Basic firefighter training.
Bachelor of Engineering and two years departmental training.
Master of Engineering and two years of departmental training.
Firefighters in Indonesia form part of the civil service of local governments and wear variant forms of uniforms worn by civil servants and employees.
The Vigili del Fuoco (the word "Vigili" comes from the Latin "Vigiles", the name of the watchmen of ancient Rome) have the official name of Corpo nazionale dei vigili del fuoco (CNVVF, National Firefighters Corps).
The CNVVF is the Italian institutional agency for fire and rescue service. It is part of the Ministry of the Interior's Department of Firefighters, Public Rescue and Public Protection. The CNVVF's task is to provide safety for people, animals and property, and to monitor the compliance of buildings and industries with fire safety rules. The Ministry of the Interior, through the CNVVF, adopts fire safety rules via ministerial decrees or other lower-rank documents. The CNVVF also provides public rescue in emergencies involving chemical, bacteriological and radiological materials. Since 2012 the Corps has used its own rank titles (dating from 2007) with matching military-styled insignia in honor of its origins.
In 2016 the CNVVF took on forest firefighting activities together with the regional forest agencies, following the suppression of the National Forest Guards, which were merged into the Carabinieri (their firefighting personnel were integrated into the CNVVF).
In Iran, every city has its own fire department, but ranks are the same in the whole country, and are as follows:
In Ireland, the traditional brigade rank structure is still used. Below is the common structure for most brigades; the Cork and Dublin fire brigades have additional ranks:
Japanese fire departments' rank insignia are placed on a small badge pinned above the right pocket. Rank is indicated by stripes and hexagram stars; the design of the insignia derives from older Japanese military insignia. Officers and team leaders may wear an arm band on the fire jacket to show their status as command leaders. Rank is sometimes also shown by the color of the fire jacket for command staff; white and gray are reserved for EMS, and orange for rescuers.
Grand-Ducal Fire and Rescue Corps of Luxembourg.
Aspirant Brigadier
Brigadier
Corporal
Chief Corporal
Chief Corporal 1st Class
Sergeant
Chief Sergeant
Sergeant major
Aspirant Adjutant
Adjutant
Chief Adjutant
Adjutant major
Aspirant Lieutenant
Lieutenant
Lieutenant 1st Class
Captain
Major
Lieutenant Colonel
Colonel
Director General
In New Zealand, rank is shown on epaulettes on firefighters' station uniform, and through colors and stripes on firefighter helmets. As the nation only has a single fire department, the New Zealand Fire Service, ranks are consistent through the country.
In the Russian Federation, the decals are applied symmetrically on both sides of the helmet (front and rear). The location of the decals on the special clothing and SCBA is established for each fire department of the same type within the territorial entity. The following ranks are used by State Fire Service civilian personnel, while military personnel use ranks similar to those of the Police of Russia, due to their pre-2001 history as the fire service of the Ministry of Internal Affairs of the Russian Federation before all firefighting services were transferred to the Ministry of Emergency Situations.
Tunisian firefighters' ranks are the same as those of the army, police, and National Guard.
In the United States, helmet colors often denote a firefighter's rank or position. In general, white helmets denote chief officers, while red helmets may denote company officers, but the specific meaning of a helmet's color or style varies from region to region and department to department. The rank of an officer in an American fire department is most commonly denoted by a number of speaking trumpets, a reference to a megaphone-like device used in the early days of the fire service, although typically called a "bugle" in today's parlance. Ranks proceed from one bugle (lieutenant) to five (fire chief). Traditional ranks in American fire departments that exist but may not always be utilized in all cities or towns include:
Chief/Commissioner
In many fire departments in the U.S., the captain is commonly the commander of a company and a lieutenant is the supervisor of the company's firefighters on shift. There is no state or federal rank structure for firefighters and each municipality or volunteer fire department creates and uses their own unique structure.
Still other American fire departments, such as the FDNY, use military rank insignia in addition to or instead of the traditional bugles. Additionally, officers on truck companies have been known to use rank insignia shaped like axes: one for lieutenants and two for captains.
Turkish firefighters in MOPP 4 level protective gear during an exercise held at Incirlik Air Base, Turkey
Toronto firefighters prepare their equipment
A firefighter using a hydraulic cutter during a demonstration
British naval men in firefighting gear on HMS Illustrious (R06), Liverpool, 25 October 2009
A partial list of some equipment typically used by firefighters:
Although people have fought fires since there have been valuable things to burn, the first instance of organized professionals combating structural fires occurred in ancient Egypt. Likewise, firefighters of the Roman Republic existed solely as privately organized and funded groups that operated more like a business than a public service; however, during the Principate period, Augustus revolutionized firefighting by calling for the creation of a fire guard that was trained, paid, and equipped by the state, thereby commissioning the first truly public and professional firefighting service. Known as the Vigiles, they were organised into cohorts, serving as a night watch and city police force.
The earliest American fire departments were volunteers, including the volunteer fire company in New Amsterdam, now known as New York.[62] Fire companies were composed of citizens who volunteered their time to help protect the community. As time progressed and new towns were established throughout the region, there was a sharp increase in the number of volunteer departments.
In 1853, the first career fire department in the United States was established in Cincinnati, Ohio, followed four years later by the St. Louis Fire Department. Large cities began establishing paid, full-time staff to handle the growing call volume.
City fire departments draw their funding directly from city taxes and share the same budget as other public works like the police department and trash services. The primary difference between municipal and city departments is the funding source: municipal fire departments do not share their budget with any other service and are considered independent entities within a jurisdiction, with their own taxes feeding their budget. City fire departments report to the mayor, whereas municipal departments are accountable to elected board officials, who help maintain and run the department along with the chief officer staff.[citation needed]
Funds for firefighting equipment may be raised by the firefighters themselves, especially in the case of volunteer organizations. Events such as pancake breakfasts and chili feeds are common in the United States. Social events used to raise money include dances, fairs, and car washes.
Media related to Firefighter at Wikimedia Commons
Fact Sheet for Firefighters and EMS providers regarding risks for exposure to COVID-19, Centers for Disease Control and Prevention.
en/4725.html.txt
A pony is a small horse (Equus ferus caballus). Depending on the context, a pony may be a horse that is under an approximate or exact height at the withers or a small horse with a specific conformation and temperament. A pony is typically under the height of 14.2 hands high. There are many different breeds. Compared to other horses, ponies often exhibit thick manes, tails and overall coat, as well as proportionally shorter legs, wider barrels, heavier bone, thicker necks, and shorter heads with broader foreheads. The word pony derives from the old French poulenet, meaning foal, a young, immature horse, but this is not the modern meaning; unlike a horse foal, a pony remains small when fully grown. On occasion, people who are unfamiliar with horses may confuse an adult pony with a foal.
The ancestors of most modern ponies developed small stature because they lived on marginally livable horse habitat. These smaller animals were domesticated and bred for various purposes all over the Northern Hemisphere. Ponies were historically used for driving and freight transport, as children's mounts, for recreational riding, and later as competitors and performers in their own right. During the Industrial Revolution, particularly in Great Britain, a significant number were used as pit ponies, hauling loads of coal in the mines.
Ponies are generally considered intelligent and friendly. They are sometimes also described as stubborn or cunning. Properly trained ponies are appropriate mounts for children who are learning to ride. Larger ponies can be ridden by adults, as ponies are usually strong for their size. In modern use, many organizations define a pony as a mature horse that measures less than 14.3 hands (59 inches, 150 cm) at the withers, but there are a number of exceptions. Different organizations that use a strict measurement model vary from 14 hands (56 inches, 142 cm) to nearly 14.3 hands (59 inches, 150 cm). Many breeds classify an animal as either horse or pony based on pedigree and phenotype, no matter its height. Some full-sized horses may be called ponies as a term of endearment.
A group of ponies is called "a string of ponies," which dates back to a mention in the 15th century Harley Manuscript.[1]
For many forms of competition, the official definition of a pony is a horse that measures less than 14.2 hands (58 inches, 147 cm) at the withers. Standard horses are 14.2 or taller. The International Federation for Equestrian Sports defines the official cutoff point at 148 centimetres (58.3 in; 14.2 hands) without shoes and 149 centimetres (58.66 in; 14.2 1⁄2 hands) with shoes, though allows a margin for competition measurement of up to 150 centimetres (59.1 in; 14.3 hands) without shoes, or 151 centimetres (59.45 in; 14.3 1⁄2 hands) with shoes.[2] However, the term "pony" can be used in general (or affectionately) for any small horse, regardless of its actual size or breed. Furthermore, some horse breeds may have individuals who mature under that height but are still called "horses" and are allowed to compete as horses. In Australia, horses that measure from 14 to 15 hands (142 to 152 cm; 56 to 60 inches) are known as a "galloway", and ponies in Australia measure under 14 hands (56 inches, 142 cm).[3]
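Hands notation is easy to misread: the digit after the point counts whole inches (0–3), not tenths, and one hand is 4 inches, so 14.2 hands is 14 × 4 + 2 = 58 inches. As a minimal sketch (the function names are illustrative, not from any equestrian standard), the conversion looks like this in Python:

```python
def hands_to_inches(hands: float) -> int:
    """Convert hands notation to inches.

    One hand is 4 inches. The digit after the point counts whole
    extra inches (0-3), so 14.2 hands = 14 * 4 + 2 = 58 inches;
    it is NOT a decimal fraction.
    """
    whole, extra = divmod(round(hands * 10), 10)
    if extra > 3:
        raise ValueError("inches digit in hands notation must be 0-3")
    return whole * 4 + extra

def inches_to_cm(inches: float) -> float:
    """Convert inches to centimetres (1 in = 2.54 cm exactly)."""
    return inches * 2.54

# The common 14.2-hand pony cutoff:
print(hands_to_inches(14.2))                          # 58
print(round(inches_to_cm(hands_to_inches(14.2)), 1))  # 147.3
```

This reproduces the figures quoted in the text: 14 hands is 56 inches (142 cm) and 14.2 hands is 58 inches (about 147 cm).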
People who are unfamiliar with horses may confuse an adult pony with a young, immature horse. While foals that will grow up to be horse-sized may be no taller than some ponies in their first months of life, their body proportions are very different. A pony can be ridden and put to work, while a foal is too young to be ridden or used as a working animal. Foals, whether they grow up to be horse or pony-sized, can be distinguished from adult horses by their extremely long legs and slim bodies. Their heads and eyes also exhibit juvenile characteristics. Furthermore, in most cases, nursing foals will be in very close proximity to a mare who is the mother (dam) of the foal. While ponies exhibit some neoteny, with their wide foreheads and small size, their body proportions are similar to those of an adult horse.
Ponies originally developed as a landrace adapted to a harsh natural environment, and were considered part of the "draft" subtype typical of Northern Europe. At one time, it was hypothesized that they may have descended from a wild "draft" subspecies of Equus ferus.[4] Studies of mitochondrial DNA (which is passed on through the female line) indicate that a large number of wild mares have contributed to modern domestic breeds;[5][6] in contrast, studies of y-DNA (passed down the male line) suggest that there was possibly just one single male ancestor of all domesticated breeds.[7] Domestication of the horse probably first occurred in the Eurasian steppes with horses of between 13 hands (52 inches, 132 cm) to over 14 hands (56 inches, 142 cm),[8] and as horse domestication spread, the male descendants of the original stallion went on to be bred with local wild mares.[7][8]
Domesticated ponies of all breeds originally developed mainly from the need for a working animal that could fulfill specific local draft and transportation needs while surviving in harsh environments. The usefulness of the pony was noted by farmers who observed that a pony could outperform a draft horse on small farms.[9]
By the 20th century, many pony breeds had Arabian and other blood added to make a more refined pony suitable for riding.[10]
Ponies are seen in many different equestrian pursuits. Some breeds, such as the Hackney pony, are primarily used for driving, while other breeds, such as the Connemara pony and Australian Pony, are used primarily for riding. Others, such as the Welsh pony, are used for both riding and driving.
There is no direct correlation between a horse's size and its inherent athletic ability.[11] Ponies compete at events that include show hunter, English riding on the flat, driving, and western riding classes at horse shows, as well as other competitive events such as gymkhana and combined driving. They are seen in casual pursuits such as trail riding, but a few ponies have performed in international-level competition. Though many exhibitors confine themselves to classes just for ponies, some top ponies are competitive against full-sized horses. For example, a 14.1-hand (57-inch; 145 cm) pony named Stroller was a member of the British Equestrian show jumping team, and won the silver medal at the 1968 Summer Olympics. More recently, the 14.1 3⁄4-hand (57.75-inch; 147 cm) pony Theodore O'Connor won the gold medal in eventing at the 2007 Pan American Games.
Pony Clubs, open to young people who own either horses or ponies, are formed worldwide to educate young people about horses, promote responsible horse ownership, and also sponsor competitive events for young people and smaller horses.
In many parts of the world ponies are also still used as working animals, as pack animals and for pulling various horse-drawn vehicles. They are used for children's pony rides at traveling carnivals and at children's private parties where small children can take short rides on ponies that are saddled and then either led individually or hitched to a "pony wheel" (a non-motorized device akin to a hot walker) that leads six to eight ponies at a time. Ponies are sometimes seen at summer camps for children, and are widely used for pony trekking and other forms of Equitourism riding holidays, often carrying adults as well as children.
Ponies are used in India to carry pilgrims on the route to Kedarnath.
Ponies are often distinguished by their phenotype: a stocky body, dense bone, round shape and well-sprung ribs. They have a short head, large eyes and small ears. In addition to being smaller than a horse, their legs are proportionately shorter. They have strong hooves and grow a heavier hair coat, seen in a thicker mane and tail as well as a particularly heavy winter coat.[12]
Pony breeds have developed all over the world, particularly in cold and harsh climates where hardy, sturdy working animals were needed. They are remarkably strong for their size. Breeds such as the Connemara pony are recognized for their ability to carry a full-sized adult rider. Pound for pound ponies can pull and carry more weight than a horse.[12] Draft-type ponies are able to pull loads significantly greater than their own weight, with larger ponies capable of pulling loads comparable to those pulled by full-sized draft horses, and even very small ponies are able to pull as much as 450 percent of their own weight.[13]
Nearly all pony breeds are very hardy, easy keepers that share the ability to thrive on a more limited diet than that of a regular-sized horse, requiring about half the hay for their weight that a horse would, and often not needing grain at all. However, for the same reason, they are also more vulnerable to laminitis and Cushing's syndrome. They may also have problems with hyperlipemia.[12]
Ponies are generally considered intelligent and friendly, though sometimes they also are described as stubborn or cunning.[12] The differences of opinion often result from an individual pony's degree of proper training. Ponies trained by inexperienced individuals, or only ridden by beginners, can turn out to be spoiled because their riders typically lack the experience base to correct bad habits. Properly trained ponies are appropriate mounts for children who are learning to ride. Larger ponies can be ridden by adults, as ponies are usually strong for their size.[12]
For showing purposes, ponies are often grouped into small, medium, and large sizes. Small ponies are 12.2 hands (50 inches, 127 cm) and under, medium ponies are over 12.2 but no taller than 13.2 hands (54 inches, 137 cm), and large ponies are over 13.2 hands (54 inches, 137 cm) but no taller than 14.2 hands (58 inches, 147 cm).
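Those show groupings can be written out as a small classifier; this is a sketch under stated assumptions (the function name, the inch-based input, and the "over pony height" label are my own, not from any show rulebook):

```python
def show_size_group(height_inches: int) -> str:
    """Classify a pony into the small/medium/large show size groups.

    Heights are in inches (one hand = 4 inches):
    12.2 hands = 50 in, 13.2 hands = 54 in, 14.2 hands = 58 in.
    """
    if height_inches <= 50:
        return "small"
    if height_inches <= 54:
        return "medium"
    if height_inches <= 58:
        return "large"
    return "over pony height"  # taller than 14.2 hands

print(show_size_group(49))  # small
print(show_size_group(58))  # large
```

Note the boundaries are inclusive on the upper end of each group, matching the "no taller than" wording above.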
The smallest equines are called miniature horses by many of their breeders and breed organizations, rather than ponies, even though they stand smaller than small ponies,[12] usually no taller than 38 inches (97 cm; 9.2 hands) at the withers. However, there are also miniature pony breeds.
Some horse breeds are not defined as ponies, even when they have some animals that measure under 14.2 hands (58 inches, 147 cm). This is usually due to body build, traditional uses and overall physiology. Breeds that are considered horses regardless of height include the Arabian horse, American Quarter Horse and the Morgan horse, all of which have individual members both over and under 14.2 hands (58 inches, 147 cm).
Other horse breeds, such as Icelandic horse and Fjord horse, may sometimes be pony-sized or have some pony characteristics, such as a heavy coat, thick mane, and heavy bone, but are generally classified as "horses" by their respective registries.[12] In cases such as these, there can be considerable debate over whether to call certain breeds "horses" or "ponies." However, individual breed registries usually are the arbiters of such debates, weighing the relative horse and pony characteristics of a breed. In some breeds, such as the Welsh pony, the horse-versus-pony controversy is resolved by creating separate divisions for consistently horse-sized animals, such as the "Section D" Welsh Cob.
Some horses may be pony height due to environment more than genetics. For example, the Chincoteague pony, a feral horse that lives on Assateague Island off the coast of Virginia, often matures to the height of an average small horse when raised from a foal under domesticated conditions.[14]
Conversely, the term "pony" is occasionally used to describe horses of normal height. Horses used for polo are often called "polo ponies" regardless of height, even though they are often of Thoroughbred breeding and often well over 14.2 hands (58 inches, 147 cm). American Indian tribes also have the tradition of referring to their horses as "ponies," when speaking in English, even though many of the Mustang horses they used in the 19th century were close to or over 14.2 hands (58 inches, 147 cm), and most horses owned and bred by Native peoples today are of full horse height. The term "pony" is also sometimes used to describe a full-sized horse in a humorous or affectionate sense.
The United States Pony Club defines "pony" to be any mount that is ridden by a member regardless of its breed or size. Persons up to 25 years old are eligible for membership, and some of the members' "ponies" actually are full-size horses.
en/4727.html.txt
A bridge is a structure built to span a physical obstacle, such as a body of water, valley, or road, without closing the way underneath. It is constructed for the purpose of providing passage over the obstacle, usually something that can be detrimental to cross otherwise. There are many different designs that each serve a particular purpose and apply to different situations. Designs of bridges vary depending on the function of the bridge, the nature of the terrain where the bridge is constructed and anchored, the material used to make it, and the funds available to build it.
Most likely the earliest bridges were fallen trees and stepping stones, while Neolithic people built boardwalk bridges across marshland. The Arkadiko Bridge dating from the 13th century BC, in the Peloponnese, in southern Greece is one of the oldest arch bridges still in existence and use.
The Oxford English Dictionary traces the origin of the word bridge to an Old English word brycg, of the same meaning.[1] The word can be traced directly back to Proto-Indo-European *bʰrēw-. The word for the card game of the same name has a different origin.
The simplest type of bridge is stepping stones, so these may have been among the earliest. Neolithic people also built a form of boardwalk across marshes, of which the Sweet Track and the Post Track are examples from England that are around 6000 years old.[2] Ancient peoples would undoubtedly also have used log bridges: timber bridges[3] formed from trees that fell naturally, or that were intentionally felled or placed, across streams. Some of the first man-made bridges with significant span were probably intentionally felled trees.[4]
Among the oldest timber bridges is the Holzbrücke Rapperswil-Hurden crossing upper Lake Zürich in Switzerland; the prehistoric timber piles discovered to the west of the Seedamm date back to 1523 BC. The first wooden footbridge led across Lake Zürich, followed by several reconstructions at least until the late 2nd century AD, when the Roman Empire built a 6-metre-wide (20 ft) wooden bridge. Between 1358 and 1360, Rudolf IV, Duke of Austria, built a 'new' wooden bridge across the lake that remained in use until 1878, measuring approximately 1,450 metres (4,760 ft) in length and 4 metres (13 ft) in width. On April 6, 2001, the reconstructed wooden footbridge was opened; it is the longest wooden bridge in Switzerland.
The Arkadiko Bridge is one of four Mycenaean corbel arch bridges that were part of a former network of roads, designed to accommodate chariots, between the fort of Tiryns and the town of Epidauros in the Peloponnese, in southern Greece. Dating to the Greek Bronze Age (13th century BC), it is one of the oldest arch bridges still in existence and use.
Several intact arched stone bridges from the Hellenistic era can be found in the Peloponnese.[5]
The greatest bridge builders of antiquity were the ancient Romans.[6] The Romans built arch bridges and aqueducts that could stand in conditions that would damage or destroy earlier designs. Some stand today.[7] An example is the Alcántara Bridge, built over the river Tagus, in Spain. The Romans also used cement, which reduced the variation of strength found in natural stone.[8] One type of cement, called pozzolana, consisted of water, lime, sand, and volcanic rock. Brick and mortar bridges were built after the Roman era, as the technology for cement was lost (then later rediscovered).
In India, the Arthashastra treatise by Kautilya mentions the construction of dams and bridges.[9] A Mauryan bridge near Girnar was surveyed by James Princep.[10] The bridge was swept away during a flood, and later repaired by Puspagupta, the chief architect of emperor Chandragupta I.[10] The use of stronger bridges using plaited bamboo and iron chain was visible in India by about the 4th century.[11] A number of bridges, both for military and commercial purposes, were constructed by the Mughal administration in India.[12]
Although large Chinese bridges of wooden construction existed at the time of the Warring States period, the oldest surviving stone bridge in China is the Zhaozhou Bridge, built from 595 to 605 AD during the Sui dynasty. This bridge is also historically significant as it is the world's oldest open-spandrel stone segmental arch bridge. European segmental arch bridges date back to at least the Alconétar Bridge (approximately 2nd century AD), while the enormous Roman era Trajan's Bridge (105 AD) featured open-spandrel segmental arches in wooden construction.[citation needed]
Rope bridges, a simple type of suspension bridge, were used by the Inca civilization in the Andes mountains of South America, just prior to European colonization in the 16th century.
During the 18th century there were many innovations in the design of timber bridges by Hans Ulrich Grubenmann, Johannes Grubenmann, and others. The first book on bridge engineering was written by Hubert Gautier in 1716.
A major breakthrough in bridge technology came with the erection of the Iron Bridge in Shropshire, England in 1779. It used cast iron for the first time as arches to cross the river Severn.[13] With the Industrial Revolution in the 19th century, truss systems of wrought iron were developed for larger bridges, but iron does not have the tensile strength to support large loads. With the advent of steel, which has a high tensile strength, much larger bridges were built, many using the ideas of Gustave Eiffel.[citation needed]
In Canada and the United States, numerous timber covered bridges were built in the late 1700s to the late 1800s, reminiscent of earlier designs in Germany and Switzerland. Some covered bridges were also built in Asia.[14] In later years, some were partly made of stone or metal but the trusses were usually still made of wood; in the United States, there were three styles of trusses, the Queen Post, the Burr Arch and the Town Lattice.[15] Hundreds of these structures still stand in North America. They were brought to the attention of the general public in the 1990s by the novel, movie, and play The Bridges of Madison County.[16][17]
In 1927 welding pioneer Stefan Bryła designed the first welded road bridge in the world, the Maurzyce Bridge which was later built across the river Słudwia at Maurzyce near Łowicz, Poland in 1929. In 1995, the American Welding Society presented the Historic Welded Structure Award for the bridge to Poland.[18]
Bridges can be categorized in several different ways. Common categories include the type of structural elements used, what they carry, whether they are fixed or movable, and the materials used.
Bridges may be classified by how the actions of tension, compression, bending, torsion and shear are distributed through their structure. Most bridges will employ all of these to some degree, but only a few will predominate. The separation of forces and moments may be quite clear. In a suspension or cable-stayed bridge, the elements in tension are distinct in shape and placement. In other cases the forces may be distributed among a large number of members, as in a truss.
The world's longest beam bridge is Lake Pontchartrain Causeway in southern Louisiana in the United States, at 23.83 miles (38.35 km), with individual spans of 56 feet (17 m).[21] Beam bridges are the simplest and oldest type of bridge in use today,[22] and are a popular type.[23]
Some cantilever bridges also have a smaller beam connecting the two cantilevers, for extra strength.
The largest cantilever bridge is the 549-metre (1,801 ft) Quebec Bridge in Quebec, Canada.
With a span of 220 metres (720 ft), the Solkan Bridge over the Soča River at Solkan in Slovenia is the second-largest stone bridge in the world and the longest railroad stone bridge. It was completed in 1905. Its arch, which was constructed from over 5,000 tonnes (4,900 long tons; 5,500 short tons) of stone blocks in just 18 days, is the second-largest stone arch in the world, surpassed only by the Friedensbrücke (Syratalviadukt) in Plauen, and the largest railroad stone arch. The arch of the Friedensbrücke, which was built in the same year, has a span of 90 m (295 ft) and crosses the valley of the Syrabach River. The difference between the two is that the Solkan Bridge was built from stone blocks, whereas the Friedensbrücke was built from a mixture of crushed stone and cement mortar.[24]
The world's largest arch bridge is the Chaotianmen Bridge over the Yangtze River with a length of 1,741 m (5,712 ft) and a span of 552 m (1,811 ft). The bridge was opened April 29, 2009, in Chongqing, China.[25]
The longest suspension bridge in the world is the 3,909 m (12,825 ft) Akashi Kaikyō Bridge in Japan.[27]
The longest cable-stayed bridge since 2012 is the 1,104 m (3,622 ft) Russky Bridge in Vladivostok, Russia.[31]
Some engineers subdivide 'beam' bridges into slab, beam-and-slab, and box girder on the basis of their cross-section.[32] A slab can be solid or voided (though the latter is no longer favored for inspectability reasons), while beam-and-slab consists of concrete or steel girders connected by a concrete slab.[33] A box-girder cross-section consists of a single-cell or multi-cellular box. In recent years, integral bridge construction has also become popular.
Most bridges are fixed bridges, meaning they have no moving parts and stay in one place until they fail or are demolished. Temporary bridges, such as Bailey bridges, are designed to be assembled and taken apart, transported to a different site, and re-used. They are important in military engineering and are also used to carry traffic while an old bridge is being rebuilt. Movable bridges are designed to move out of the way of boats or other kinds of traffic, which would otherwise be too tall to fit. These are generally electrically powered.[34]
Double-decked (or double-decker) bridges have two levels. One example is the George Washington Bridge, connecting New York City to Bergen County, New Jersey, US, which is the world's busiest bridge, carrying 102 million vehicles annually;[35][36] truss work between the roadway levels provided stiffness to the roadways and reduced movement of the upper level when the lower level was installed three decades after the upper level. The Tsing Ma Bridge and Kap Shui Mun Bridge in Hong Kong have six lanes on their upper decks, and on their lower decks there are two lanes and a pair of tracks for MTR metro trains. Some double-decked bridges only use one level for street traffic; the Washington Avenue Bridge in Minneapolis reserves its lower level for automobile and light rail traffic and its upper level for pedestrian and bicycle traffic (predominantly students at the University of Minnesota). Likewise, in Toronto, the Prince Edward Viaduct has five lanes of motor traffic, bicycle lanes, and sidewalks on its upper deck, and a pair of tracks for the Bloor–Danforth subway line on its lower deck. The western span of the San Francisco–Oakland Bay Bridge also has two levels.
Robert Stephenson's High Level Bridge across the River Tyne in Newcastle upon Tyne, completed in 1849, is an early example of a double-decked bridge. The upper level carries a railway, and the lower level is used for road traffic. Other examples include Britannia Bridge over the Menai Strait and Craigavon Bridge in Derry, Northern Ireland. The Oresund Bridge between Copenhagen and Malmö consists of a four-lane highway on the upper level and a pair of railway tracks on the lower level. Tower Bridge in London is a different example of a double-decked bridge, with the central section consisting of a low-level bascule span and a high-level footbridge.
A viaduct is made up of multiple bridges connected into one longer structure. The longest and some of the highest bridges are viaducts, such as the Lake Pontchartrain Causeway and Millau Viaduct.
A multi-way bridge has three or more separate spans which meet near the center of the bridge. Multi-way bridges with only three spans appear as a "T" or "Y" when viewed from above. Multi-way bridges are extremely rare. The Tridge, Margaret Bridge, and Zanesville Y-Bridge are examples.
A bridge can be categorized by what it is designed to carry, such as trains, pedestrian or road traffic (road bridge), a pipeline or waterway for water transport or barge traffic. An aqueduct is a bridge that carries water, resembling a viaduct, which is a bridge that connects points of equal height. A road-rail bridge carries both road and rail traffic. Overway is a term for a bridge that separates incompatible intersecting traffic, especially road and rail.[37] A bridge can carry overhead power lines as does the Storstrøm Bridge.[citation needed]
Some bridges accommodate other purposes, such as the tower of Nový Most Bridge in Bratislava, which features a restaurant, or a bridge-restaurant which is a bridge built to serve as a restaurant. Other suspension bridge towers carry transmission antennas.[citation needed]
Conservationists use wildlife overpasses to reduce habitat fragmentation and animal-vehicle collisions. The first animal bridges sprang up in France in the 1950s, and these types of bridges are now used worldwide to protect both large and small wildlife.[38][39][40]
Bridges are subject to unplanned uses as well. The areas underneath some bridges have become makeshift shelters and homes to homeless people, and the undertimbers of bridges all around the world are spots of prevalent graffiti. Some bridges attract people attempting suicide, and become known as suicide bridges.[citation needed][41]
The materials used to build the structure are also used to categorize bridges. Until the end of the 18th century, bridges were made out of timber, stone and masonry. Modern bridges are currently built in concrete, steel, fiber reinforced polymers (FRP), stainless steel or combinations of those materials. Living bridges have been constructed of live plants such as Ficus elastica tree roots in India[42] and wisteria vines in Japan.[43]
The Tank bridge transporter (TBT) has the same cross-country performance as a tank even when fully loaded. It can deploy, drop off and load bridges independently, but it cannot recover them.
Unlike buildings, whose design is led by architects, bridges are usually designed by engineers. This follows from the importance of the engineering requirements, namely spanning the obstacle and having the durability to survive, with minimal maintenance, in an aggressive outdoor environment.[33] Bridges are first analysed: the bending moment and shear force distributions due to the applied loads are calculated. For this, the finite element method is the most popular, and the analysis can be one-, two- or three-dimensional. For the majority of bridges, a two-dimensional plate model (often with stiffening beams) or an upstand finite element model is sufficient.[48] On completion of the analysis, the bridge is designed to resist the applied bending moments and shear forces; section sizes are selected with sufficient capacity to resist the stresses. Many bridges are made of prestressed concrete, which has good durability properties, achieved either by pre-tensioning of beams prior to installation or by post-tensioning on site.
In most countries, bridges, like other structures, are designed according to Load and Resistance Factor Design (LRFD) principles. In simple terms, this means that the load is factored up by a factor greater than unity, while the resistance or capacity of the structure is factored down by a factor less than unity. The effect of the factored load (stress, bending moment) should be less than the factored resistance to that effect. Both of these factors allow for uncertainty and are greater when the uncertainty is greater.
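As an illustration of the LRFD comparison described above, the check can be sketched as follows. This is a minimal sketch, not taken from any particular design code: the factor values and the girder numbers are hypothetical examples.

```python
# Minimal LRFD-style design check (illustrative; real codes specify many
# factors per load type and limit state).
def lrfd_check(load_effect, resistance, load_factor=1.35, resistance_factor=0.9):
    """Return True if the factored load effect is within the factored resistance.

    load_effect -- unfactored effect of the load, e.g. a bending moment in kNm
    resistance  -- nominal capacity of the section for that effect, in kNm
    """
    factored_effect = load_effect * load_factor            # factor > 1 inflates demand
    factored_resistance = resistance * resistance_factor   # factor < 1 deflates capacity
    return factored_effect <= factored_resistance

# A hypothetical girder with a 400 kNm applied moment and 650 kNm capacity:
print(lrfd_check(400.0, 650.0))  # True: 540.0 <= 585.0
# The same girder with only 500 kNm capacity fails the check:
print(lrfd_check(400.0, 500.0))  # False: 540.0 > 450.0
```

Note how uncertainty enters twice: a heavier-than-expected load and a weaker-than-expected section are both covered by their respective factors.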
Most bridges are utilitarian in appearance, but in some cases, the appearance of the bridge can have great importance.[49] Often, this is the case with a large bridge that serves as an entrance to a city or crosses over a main harbor entrance. These are sometimes known as signature bridges. Designers of bridges in parks and along parkways often place more importance on aesthetics as well. Examples include the stone-faced bridges along the Taconic State Parkway in New York.
To create a beautiful image, some bridges are built much taller than necessary. This type, often found in east-Asian style gardens, is called a Moon bridge, evoking a rising full moon. Other garden bridges may cross only a dry bed of stream washed pebbles, intended only to convey an impression of a stream. Often in palaces a bridge will be built over an artificial waterway as symbolic of a passage to an important place or state of mind. A set of five bridges cross a sinuous waterway in an important courtyard of the Forbidden City in Beijing, China. The central bridge was reserved exclusively for the use of the Emperor and Empress, with their attendants.
Bridge maintenance consists of a combination of structural health monitoring and testing. This is regulated in country-specific engineering standards and includes ongoing monitoring every three to six months, a simple test or inspection every two to three years, and a major inspection every six to ten years. In Europe, the cost of maintenance is considerable[32] and in some countries is higher than spending on new bridges. The lifetime of welded steel bridges can be significantly extended by aftertreatment of the weld transitions, potentially yielding a high benefit by allowing existing bridges to be used far beyond their planned lifetime.
While the response of a bridge to the applied loading is well understood, the applied traffic loading itself is still the subject of research.[50] This is a statistical problem, as loading is highly variable, particularly for road bridges. Load effects in bridges (stresses, bending moments) are designed for using the principles of Load and Resistance Factor Design. Before factoring to allow for uncertainty, the load effect is generally taken to be the maximum characteristic value in a specified return period. Notably, in Europe, it is the maximum value expected in 1,000 years.
Bridge standards generally include a load model, deemed to represent the characteristic maximum load to be expected in the return period. In the past, these load models were agreed by standard drafting committees of experts but today, this situation is changing. It is now possible to measure the components of bridge traffic load, to weigh trucks, using weigh-in-motion (WIM) technologies. With extensive WIM databases, it is possible to calculate the maximum expected load effect in the specified return period. This is an active area of research, addressing issues of opposing direction lanes,[51][52] side-by-side (same direction) lanes,[53][54] traffic growth,[55] permit/non-permit vehicles[56] and long-span bridges (see below). Rather than repeat this complex process every time a bridge is to be designed, standards authorities specify simplified notional load models, notably HL-93,[57][58] intended to give the same load effects as the characteristic maximum values. The Eurocode is an example of a standard for bridge traffic loading that was developed in this way.[59]
Most bridge standards are only applicable for short and medium spans;[60] for example, the Eurocode is only applicable for loaded lengths up to 200 m. Longer spans are dealt with on a case-by-case basis. It is generally accepted that the intensity of load reduces as span increases, because the probability of many trucks being closely spaced and extremely heavy reduces as the number of trucks involved increases. It is also generally assumed that short spans are governed by a small number of trucks traveling at high speed, with an allowance for dynamics. Longer spans, on the other hand, are governed by congested traffic, and no allowance for dynamics is needed. Calculating the loading due to congested traffic remains a challenge, as there is a paucity of data on inter-vehicle gaps, both within-lane and inter-lane, in congested conditions. Weigh-in-motion (WIM) systems provide data on inter-vehicle gaps but only operate well in free-flowing traffic conditions. Some authors have used cameras to measure gaps and vehicle lengths in jammed situations and have inferred weights from lengths using WIM data.[61] Others have used microsimulation to generate typical clusters of vehicles on the bridge.[62][63][64]
Bridges vibrate under load, and this contributes, to a greater or lesser extent, to the stresses.[33] Vibration and dynamics are generally more significant for slender structures such as pedestrian bridges and long-span road or rail bridges. One of the most famous examples is the Tacoma Narrows Bridge, which collapsed shortly after being constructed due to excessive vibration. More recently, the Millennium Bridge in London vibrated excessively under pedestrian loading and was closed and retrofitted with a system of dampers. For smaller bridges, dynamics is not catastrophic but can contribute an added amplification to the stresses due to static effects. For example, the Eurocode for bridge loading specifies amplifications of between 10% and 70%, depending on the span, the number of traffic lanes, and the type of stress (bending moment or shear force).[65]
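The dynamic amplification described above is a simple multiplier on the static load effect. A short sketch (the 10-70% range is from the Eurocode as cited in the text; the specific moment and factor below are hypothetical example values):

```python
# Applying a dynamic amplification factor (DAF) to a statically computed
# load effect, as a bridge loading standard would require.
def amplified_effect(static_effect, daf):
    """Return the design load effect after dynamic amplification.

    static_effect -- load effect from static analysis (e.g. bending moment, kNm)
    daf           -- dynamic amplification as a fraction (0.10 to 0.70 per the
                     Eurocode range quoted above)
    """
    return static_effect * (1.0 + daf)

# A hypothetical short span with a 500 kNm static moment and a 40% DAF:
print(round(amplified_effect(500.0, 0.40), 1))  # 700.0 kNm
```

The same static analysis thus yields quite different design values depending on span, lane count, and stress type, which is why the standard tabulates the factor rather than fixing a single number.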
There have been many studies of the dynamic interaction between vehicles and bridges during vehicle crossing events. Fryba[66] did pioneering work on the interaction of a moving load and an Euler-Bernoulli beam. With increased computing power, vehicle-bridge interaction (VBI) models have become ever more sophisticated.[67][68][69][70] The concern is that one of the many natural frequencies associated with the vehicle will resonate with the bridge's first natural frequency.[71] The vehicle-related frequencies include body bounce and axle hop, but there are also pseudo-frequencies associated with the vehicle's speed of crossing,[72] and there are many frequencies associated with the surface profile.[50] Given the wide variety of heavy vehicles on road bridges, a statistical approach has been suggested, with VBI analyses carried out for many statically extreme loading events.[73]
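The resonance concern above can be made concrete with the standard Euler-Bernoulli result for the first natural frequency of a simply supported beam (this formula is textbook material, not from the cited papers; the span, stiffness, and mass values below are hypothetical examples):

```python
import math

def first_natural_frequency(length_m, EI, mass_per_m):
    """First natural frequency of a simply supported Euler-Bernoulli beam, in Hz.

    f1 = (pi / (2 L^2)) * sqrt(EI / mu)

    length_m   -- span L in metres
    EI         -- flexural rigidity in N*m^2
    mass_per_m -- mass per unit length mu in kg/m
    """
    return (math.pi / (2.0 * length_m ** 2)) * math.sqrt(EI / mass_per_m)

# A hypothetical 30 m span with EI = 8e10 N*m^2 and 15,000 kg/m:
f1 = first_natural_frequency(30.0, 8e10, 15000.0)
print(round(f1, 2))  # approximately 4.03 Hz for these values
```

A bridge frequency in this range could then be compared against the vehicle-related frequencies mentioned above (body bounce is typically the lowest of these) to judge whether resonance is plausible for a given crossing.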
The failure of bridges is of special concern for structural engineers, who try to learn lessons vital to bridge design, construction and maintenance. Bridge failures first attracted national interest during the Victorian era, when many new designs were being built, often using new materials.
In the United States, the National Bridge Inventory tracks the structural evaluations of all bridges, including designations such as "structurally deficient" and "functionally obsolete".
There are several methods used to monitor the condition of large structures like bridges. Many long-span bridges are now routinely monitored with a range of sensors. Many types of sensors are used, including strain transducers, accelerometers,[74] tiltmeters, and GPS. Accelerometers have the advantage that they are inertial, i.e., they do not require a reference point to measure from. This is often a problem for distance or deflection measurement, especially if the bridge is over water.
An option for structural-integrity monitoring is "non-contact monitoring", which uses the Doppler effect (Doppler shift). A laser beam from a Laser Doppler Vibrometer is directed at the point of interest, and the vibration amplitude and frequency are extracted from the Doppler shift of the laser beam frequency due to the motion of the surface.[75] The advantages of this method are that the equipment setup time is short and that, unlike an accelerometer, it allows measurements on multiple structures in a short time. Additionally, this method can measure specific points on a bridge that might be difficult to access. However, vibrometers are relatively expensive and have the disadvantage that a reference point is needed to measure from.
Snapshots in time of the external condition of a bridge can be recorded using Lidar to aid bridge inspection.[76] This can provide measurement of the bridge geometry (to facilitate the building of a computer model) but the accuracy is generally insufficient to measure bridge deflections under load.
While larger modern bridges are routinely monitored electronically, smaller bridges are generally inspected visually by trained inspectors. There is considerable research interest in the challenge of smaller bridges as they are often remote and do not have electrical power on site. Possible solutions are the installation of sensors on a specialist inspection vehicle and the use of its measurements as it drives over the bridge to infer information about the bridge condition.[77][78][79] These vehicles can be equipped with accelerometers, gyrometers, Laser Doppler Vibrometers[80][81] and some even have the capability to apply a resonant force to the road surface in order to dynamically excite the bridge at its resonant frequency.
en/4728.html.txt
Oviparous animals are animals that lay eggs, with little or no other embryonic development within the mother. This is the reproductive method of most fish, amphibians, reptiles, birds, and the monotremes.
In traditional usage, most insects (one being Culex pipiens, or the common house mosquito), molluscs, and arachnids are also described as oviparous.
The traditional modes of reproduction include oviparity, taken to be the ancestral condition, in which either unfertilised oocytes or fertilised eggs are spawned, and viviparity, traditionally including any mechanism in which young are born live, or in which the development of the young is supported by either parent in or on any part of its body.[1]
However, the biologist Thierry Lodé recently divided the traditional category of oviparous reproduction into two modes that he named ovuliparity and (true) oviparity respectively. He distinguished the two on the basis of the relationship between the zygote (fertilised egg) and the parents:[1][2]
In all but special cases of both ovuliparity and oviparity, the overwhelming source of nourishment for the embryo is the yolk material deposited in the egg by the reproductive system of the mother (vitellogenesis); offspring that depend on yolk in this manner are said to be lecithotrophic (as opposed to matrotrophic), which literally means "feeding on yolk".
Distinguishing between the definitions of oviparity and ovuliparity necessarily reduces the number of species whose modes of reproduction are classified as oviparous, as they no longer include the ovuliparous species such as most fish, most frogs and many invertebrates. Such classifications are largely for convenience and as such can be important in practice, but speaking loosely in contexts in which the distinction is not relevant, it is common to lump both categories together as "oviparous".
en/4729.html.txt
The Black Sea is a body of water and marginal sea of the Atlantic Ocean between Eastern Europe, the Caucasus, and Western Asia.[1] It is supplied by a number of major rivers, including the Danube, Dnieper, Southern Bug, Dniester, Don, and the Rioni. The watersheds of many countries drain into the Black Sea beyond the six that immediately border it.[2]
The Black Sea has an area of 436,400 km2 (168,500 sq mi) (not including the Sea of Azov),[3] a maximum depth of 2,212 m (7,257 ft),[4] and a volume of 547,000 km3 (131,000 cu mi).[5] It is constrained by the Pontic Mountains to the south, Caucasus Mountains to the east, Crimean Mountains to the north, Strandzha to the southwest, Balkan Mountains to the west, Dobrogea Plateau to the northwest, and features a wide shelf to the northwest.
The longest east–west extent is about 1,175 km (730 mi).[6] Important cities along the coast include Odessa, Sevastopol, Samsun, and Istanbul.
The Black Sea is bordered by Ukraine, Romania, Bulgaria, Turkey, Georgia, and Russia. It has a positive water balance, with an annual net outflow of 300 km3 (72 cu mi) through the Bosporus and the Dardanelles into the Aegean Sea.[citation needed] While the net flow of water through the Bosporus and Dardanelles (known collectively as the Turkish Straits) is out of the Black Sea, water generally flows in both directions simultaneously: denser, more saline water from the Aegean flows into the Black Sea underneath the less dense, fresher outflowing water from the Black Sea. This creates a significant and permanent layer of deep water which does not drain or mix and is therefore anoxic. This anoxic layer is responsible for the preservation of ancient shipwrecks which have been found in the Black Sea.
The Black Sea ultimately drains into the Mediterranean Sea, via the Turkish Straits and the Aegean Sea. The Bosporus Strait connects it to the small Sea of Marmara which in turn is connected to the Aegean Sea via the Strait of the Dardanelles. To the north the Black Sea is connected to the Sea of Azov by the Kerch Strait.
The water level has varied significantly over geological time. Due to these variations in the water level in the basin, the surrounding shelf and associated aprons have sometimes been dry land. At certain critical water levels, connections with surrounding water bodies can become established. It is through the most active of these connective routes, the Turkish Straits, that the Black Sea joins the world ocean. During geological periods when this hydrological link was not present, the Black Sea was an endorheic basin, operating independently of the global ocean system (similar to the Caspian Sea today). Currently, the Black Sea water level is relatively high; thus, water is being exchanged with the Mediterranean. The Turkish Straits connect the Black Sea with the Aegean Sea and comprise the Bosporus, the Sea of Marmara, and the Dardanelles. The Black Sea undersea river is a current of particularly saline water flowing through the Bosporus Strait and along the seabed of the Black Sea, the first of its kind discovered.
The International Hydrographic Organization defines the limits of the Black Sea as follows:[7]
On the Southwest. The Northeastern limit of the Sea of Marmara [A line joining Cape Rumili with Cape Anatoli (41°13'N)].
In the Kertch Strait. A line joining Cape Takil and Cape Panaghia (45°02'N).
Coastal cities pictured: Istanbul, Odessa, Samsun, Constanța.
The Black Sea is divided into two depositional basins—the Western Black Sea and Eastern Black Sea—separated by the Mid-Black Sea High, which includes the Andrusov Ridge, Tetyaev High, and Archangelsky High, extending south from the Crimean Peninsula.
The basin includes two distinct relict back-arc basins which were initiated by the splitting of an Albian volcanic arc and the subduction of both the Paleo- and Neo-Tethys Oceans, but the timings of these events remain uncertain. Arc volcanism and extension occurred as the Neo-Tethys Ocean subducted under the southern margin of Laurasia during the Mesozoic. Uplift and compressional deformation took place as the Neotethys continued to close. Seismic surveys indicate that rifting began in the Western Black Sea in the Barremian and Aptian followed by the formation of oceanic crust 20 million years later in the Santonian.[14][15][16] Since its initiation, compressional tectonic environments led to subsidence in the basin, interspersed with extensional phases resulting in large-scale volcanism and numerous orogenies, causing the uplift of the Greater Caucasus, Pontides, Southern Crimean Peninsula and Balkanides mountain ranges.[17]
During the Messinian salinity crisis in the neighboring Mediterranean Sea, water levels fell, but the sea did not dry up.[18]
The ongoing collision between the Eurasian and African plates and westward escape of the Anatolian block along the North Anatolian Fault and East Anatolian Faults dictates the current tectonic regime,[17] which features enhanced subsidence in the Black Sea basin and significant volcanic activity in the Anatolian region.[19] These geological mechanisms, in the long term, have caused the periodic isolations of the Black Sea from the rest of the global ocean system.
The large shelf to the north of the basin is up to 190 km (120 mi) wide and features a shallow apron with gradients between 1:40 and 1:1000. The southern edge around Turkey and the eastern edge around Georgia, however, are typified by a narrow shelf that rarely exceeds 20 km (12 mi) in width and a steep apron with a gradient of typically 1:40, with numerous submarine canyons and channel extensions. The Euxine abyssal plain in the centre of the Black Sea reaches a maximum depth of 2,212 metres (7,257 ft) just south of Yalta on the Crimean Peninsula.[20]
The littoral zone of the Black Sea is often referred to as the Pontic littoral or Pontic zone.[21]
The area surrounding the Black Sea is commonly referred to as the Black Sea Region. Its northern part lies within the Chernozem belt (black soil belt) which goes from eastern Croatia (Slavonia), along the Danube (northern Serbia, northern Bulgaria (Danubian Plain) and southern Romania (Wallachian Plain)) to northeast Ukraine and further across the Central Black Earth Region and southern Russia into Siberia.[22]
The Paleo-Euxinian is described by the accumulation of eolian silt deposits (related to the Riss glaciation) and the lowering of sea levels (MIS 6, 8 and 10). The Karangat marine transgression occurred during the Eemian Interglacial (MIS 5e); this may have been the highest sea level reached in the late Pleistocene. Based on this, some scholars have suggested that the Crimean Peninsula was isolated from the mainland by a shallow strait during the Eemian Interglacial.[23]
The Neoeuxinian transgression began with an inflow of waters from the Caspian Sea. Neoeuxinian deposits are found in the Black Sea below -20 m water depth in three layers. The upper layers correspond with the peak of the Khvalinian transgression, on the shelf shallow-water sands and coquina mixed with silty sands and brackish-water fauna, and inside the Black Sea Depression hydrotroilite silts. The middle layers on the shelf are sands with brackish-water mollusc shells. Of continental origin, the lower level on the shelf is mostly alluvial sands with pebbles, mixed with less common lacustrine silts and freshwater mollusc shells. Inside the Black Sea Depression they are terrigenous non-carbonate silts, and at the foot of the continental slope turbidite sediments.[24]
The Black Sea contains oil and natural gas resources but exploration in the sea is incomplete. As of 2017, 20 wells are in place. Throughout much of its existence, the Black Sea has had significant oil and gas-forming potential because of significant inflows of sediment and nutrient-rich waters. However, this varies geographically. For example, prospects are poorer off the coast of Bulgaria because of the large influx of sediment from the Danube River which obscured sunlight and diluted organic-rich sediments. Many of the discoveries to date have taken place offshore of Romania in the Western Black Sea and only a few discoveries have been made in the Eastern Black Sea.
During the Eocene, the Paratethys Ocean was partially isolated and sea levels fell. During this time, sand shed off the rising Balkanide, Pontide and Caucasus mountains trapped organic material in the Maykop Suite of rocks through the Oligocene and early Miocene. Natural gas appears in rocks deposited in the Miocene and Pliocene by the paleo-Dnieper and paleo-Dniester rivers, or in deep-water Oligocene-age rocks. Serious exploration began in 1999 with two deep-water wells, Limanköy-1 and Limanköy-2, drilled in Turkish waters. Next, the HPX (Hopa)-1 deepwater well targeted late Miocene sandstone units in the Achara-Trialet fold belt (also known as the Gurian fold belt) along the Georgia-Turkey maritime border. Although geologists inferred that these rocks might hold hydrocarbons that had migrated from the Maykop Suite, the well was unsuccessful. No more drilling happened for five years after the HPX-1 well. Then in 2010, Sinop-1 targeted carbonate reservoirs potentially charged from the nearby Maykop Suite on the Andrusov Ridge, but the well struck only Cretaceous volcanic rocks. Yassihöyük-1 encountered similar problems. Other Turkish wells, Sürmene-1 and Sile-1, drilled in the Eastern Black Sea in 2011 and 2015 respectively, tested four-way closures above Cretaceous volcanoes, with no results in either case.
A different Turkish well, Kastamonu-1, drilled in 2011, did successfully find thermogenic gas in Pliocene and Miocene shale-cored anticlines in the Western Black Sea. A year later, in 2012, Romania drilled Domino-1, which struck gas, prompting the drilling of other wells in the Neptun Deep. In 2016, the Bulgarian well Polshkov-1 targeted Maykop Suite sandstones in the Polshkov High, and Russia is in the process of drilling Jurassic carbonates on the Shatsky Ridge as of 2018.[25]
Current names of the sea are usually equivalents of the English name "Black Sea", including these given in the countries bordering the sea:[26]
Such names have not yet been shown conclusively to predate the 13th century.[27]
In Greece, the historical name "Euxine Sea", which holds a different meaning (see below), is still widely used:
The principal Greek name "Póntos Áxeinos" is generally accepted to be a rendering of Iranian word *axšaina- ("dark colored"), compare Avestan axšaēna- ("dark colored"), Old Persian axšaina- (color of turquoise), Middle Persian axšēn/xašēn ("blue"), and New Persian xašīn ("blue"), as well as Ossetic œxsīn ("dark gray").[27] The ancient Greeks, most likely those living to the north of the Black Sea, subsequently adopted the name and altered it to á-xe(i)nos.[27] Thereafter, Greek tradition refers to the Black Sea as the "Inhospitable Sea", Πόντος Ἄξεινος Póntos Áxeinos, which is first attested in Pindar (c. 475 BC).[27] The name was considered to be "ominous" and was later changed into the euphemistic name "Hospitable sea", Εὔξεινος Πόντος Eúxeinos Póntos, which was also for the first time attested in Pindar.[27] This became the commonly used designation for the sea in Greek.[27] In contexts related to mythology, the older form Póntos Áxeinos remained favored.[27]
It has been erroneously suggested that the name was derived from the color of the water, or was at least related to climatic conditions.[27] Black or dark in this context, however, referred to a system in which colors represent the cardinal points of the known world.[27] Black or dark represented the north; red the south; white the west; and green or light blue the east.[27] The symbolism based on cardinal points was used on multiple occasions and is therefore widely attested.[27] For example, the "Red Sea", a body of water reported since the time of Herodotus (c. 484–c. 425 BC), in fact designated the Indian Ocean together with the bodies of water now known as the Persian Gulf and the Red Sea.[27] By the same reasoning, it is considered impossible for the Scythians, who principally roamed in present-day Ukraine and Russia, to have given the designation, because they lived to the north of the sea, which would therefore have been a southern body of water for them.[27] The name could only have been given by people who were aware of both a northern "black/dark" and a southern "red" sea; it is therefore considered probable that the name was given by the Achaemenids (550–330 BC).[27]
Strabo's Geographica (1.2.10) reports that in antiquity, the Black Sea was often simply called "the Sea" (ὁ πόντος ho Pontos). He also thought the Black Sea was called "inhospitable" before Greek colonization because it was difficult to navigate and because its shores were inhabited by savage tribes. (7.3.6) The name was changed to "hospitable" after the Milesians had colonized the Pontus region of the southern shoreline, making it part of Greek civilization.
In Greater Bundahishn, a sacred Zoroastrian text written in Middle Persian, the Black Sea is called Siyābun.[28] A 1570 map of Asia titled Asiae Nova Descriptio from Abraham Ortelius's Theatrum Orbis Terrarum labels the sea Mar Maggior ("Great Sea", compare Latin mare major).[29]
The lower layers of the sea have a high concentration of hydrogen sulfide. Hence anything that remains at a depth of 150 m or more, whether animals, dead plants, or metallic objects from ships, becomes covered in black sludge, which has been suggested as another reason for the name "Black Sea".[30]
English-language writers of the 18th century often used the name Euxine Sea (/ˈjuːksɪn/ or /ˈjuːkˌsaɪn/) to refer to the Black Sea. Edward Gibbon, for instance, calls the sea by this name throughout The History of the Decline and Fall of the Roman Empire.[31] During the Ottoman Empire period, the Black Sea was called either Bahr-e Siyah or Karadeniz, both meaning "the Black Sea" in Ottoman Turkish.[citation needed]
In the tenth-century geography book Hudud al-'Alam, which was written in Persian by an unknown author, the Black Sea is called the Georgian Sea or Sea of the Georgians (daryā-yi Gurz).[32] The Georgian Chronicles used the name zğua sperisa (ზღუა სპერისა), which means the "Sea of Speri", named after the Kartvelian tribe Speris or Saspers.[33] The modern names of the Black Sea (Chyornoye more, Karadeniz, etc.) originated in the 13th century.[27]
The Black Sea is a marginal sea[34] and is the world's largest body of water with a meromictic basin.[35] The deep waters do not mix with the upper layers of water that receive oxygen from the atmosphere. As a result, over 90% of the deeper Black Sea volume is anoxic water.[36] The Black Sea's circulation patterns are primarily controlled by basin topography and fluvial inputs, which result in a strongly stratified vertical structure. Because of the extreme stratification, it is classified as a salt wedge estuary.
The Black Sea only exchanges water with the Mediterranean Sea, so all inflow and outflow occurs through the Bosporus and Dardanelles. Inflow from the Mediterranean has a higher salinity and density than the outflow, creating the classical estuarine circulation: the dense Mediterranean water flows in along the bottom of the basin, while the fresher Black Sea surface water flows out into the Marmara Sea near the surface. The outflow is 16,000 m3/s (around 500 km3/year) and the inflow is 11,000 m3/s (around 350 km3/year), according to Gregg (2002).[37]
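As a quick arithmetic check (an illustration, not part of the cited source), the per-second flow rates quoted above can be converted into the approximate annual volumes:

```python
# Convert the Bosporus exchange flows from m^3/s to km^3/year to verify
# the approximate annual volumes quoted in the text.
SECONDS_PER_YEAR = 365.25 * 24 * 3600  # Julian year, about 3.156e7 s

def m3s_to_km3yr(flow_m3_per_s: float) -> float:
    """Convert a volumetric flow rate from m^3/s to km^3/year (1 km^3 = 1e9 m^3)."""
    return flow_m3_per_s * SECONDS_PER_YEAR / 1e9

outflow = m3s_to_km3yr(16_000)  # fresher surface outflow toward the Marmara Sea
inflow = m3s_to_km3yr(11_000)   # dense Mediterranean inflow along the bottom

print(f"outflow ~ {outflow:.0f} km3/yr, inflow ~ {inflow:.0f} km3/yr")
```

The difference of roughly 160 km3/year between outflow and inflow is balanced by river discharge, precipitation, and evaporation in the basin's overall water budget.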
The following water budget can be estimated:
The southern sill of the Bosporus is located 36.5 m below present sea level (the deepest spot of the shallowest cross-section in the Bosporus, located in front of Dolmabahçe Palace) and has a wet section of around 38,000 m2.[37] Inflow and outflow current speeds average around 0.3 to 0.4 m/s, but much higher speeds are found locally, inducing significant turbulence and vertical shear, which allows turbulent mixing of the two layers.[34] Surface water leaves the Black Sea with a salinity of 17 practical salinity units (PSU) and reaches the Mediterranean at a salinity of 34 PSU. Likewise, the Mediterranean inflow, with a salinity of 38.5 PSU, decreases to about 34 PSU.[34]
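The salinity changes across the strait can be illustrated with a simple two-end-member conservative mixing estimate (a deliberate simplification for illustration, not the full exchange-flow physics):

```python
# Estimate the fraction of high-salinity Mediterranean water that must be
# entrained into the Black Sea surface outflow for its salinity to rise
# from 17 PSU to 34 PSU, assuming conservative two-end-member mixing.
S_SURFACE_OUT = 17.0   # PSU, Black Sea surface water entering the strait
S_MED_IN = 38.5        # PSU, Mediterranean water entering the strait

def mixing_fraction(s_mix: float, s_low: float, s_high: float) -> float:
    """Fraction of the high-salinity end-member in a conservative mixture."""
    return (s_mix - s_low) / (s_high - s_low)

f = mixing_fraction(34.0, S_SURFACE_OUT, S_MED_IN)
print(f"entrained high-salinity fraction ~ {f:.2f}")
```

This comes out to roughly 0.79, consistent with the strong turbulent mixing described above; real entrainment varies along the strait and with season.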
Mean surface circulation is cyclonic and waters around the perimeter of the Black Sea circulate in a basin-wide shelfbreak gyre known as the Rim Current. The Rim Current has a maximum velocity of about 50–100 cm/s. Within this feature, two smaller cyclonic gyres operate, occupying the eastern and western sectors of the basin.[34] The Eastern and Western Gyres are well-organized systems in the winter but dissipate into a series of interconnected eddies in the summer and autumn. Mesoscale activity in the peripheral flow becomes more pronounced during these warmer seasons and is subject to interannual variability.
Outside of the Rim Current, numerous quasi-permanent coastal eddies are formed as a result of upwelling around the coastal apron and "wind curl" mechanisms. The intra-annual strength of these features is controlled by seasonal atmospheric and fluvial variations. During the spring, the Batumi eddy forms in the southeastern corner of the sea.[40]
Beneath the surface waters—from about 50–100 meters—there exists a halocline that stops at the Cold Intermediate Layer (CIL). This layer is composed of cool, salty surface waters, which are the result of localized atmospheric cooling and decreased fluvial input during the winter months. It is the remnant of the winter surface mixed layer.[34] The base of the CIL is marked by a major pycnocline at about 100–200 metres (330–660 ft) and this density disparity is the major mechanism for isolation of the deep water.
Below the pycnocline is the Deep Water mass, where salinity increases to 22.3 PSU and temperatures rise to around 8.9 °C.[34] The hydrochemical environment shifts from oxygenated to anoxic, as bacterial decomposition of sunken biomass utilizes all of the free oxygen. Weak geothermal heating and long residence time create a very thick convective bottom layer.[40]
The Black Sea undersea river is a current of particularly saline water flowing through the Bosporus Strait and along the seabed of the Black Sea. The discovery of the river, announced on 1 August 2010, was made by scientists at the University of Leeds, and it is the first of its kind in the world.[41] The undersea river stems from salty water spilling through the Bosporus Strait from the Mediterranean Sea into the Black Sea, where the water has a lower salt content.[41]
Because of the anoxic water at depth, organic matter, including anthropogenic artifacts such as boat hulls, is well preserved. During periods of high surface productivity, short-lived algal blooms form organic-rich layers known as sapropels. Scientists have reported an annual phytoplankton bloom that can be seen in many NASA images of the region.[42] As a result of these characteristics, the Black Sea has attracted interest from the field of marine archaeology, as ancient shipwrecks in excellent states of preservation have been discovered, such as the Byzantine wreck Sinop D, located in the anoxic layer off the coast of Sinop, Turkey.
Modelling shows the release of the hydrogen sulfide clouds in the event of an asteroid impact into the Black Sea would pose a threat to health—or even life—for people living on the Black Sea coast.[43]
There have been isolated reports of flares on the Black Sea occurring during thunderstorms, possibly caused by lightning igniting combustible gas seeping up from the sea depths.[44]
The Black Sea supports an active and dynamic marine ecosystem, dominated by species suited to its brackish, nutrient-rich conditions. As with all marine food webs, the Black Sea features a range of trophic groups, with autotrophic algae, including diatoms and dinoflagellates, acting as primary producers. The fluvial systems draining Eurasia and central Europe introduce large volumes of sediment and dissolved nutrients into the Black Sea, but the distribution of these nutrients is controlled by the degree of physiochemical stratification, which is, in turn, dictated by seasonal physiographic development.[45]
During winter, strong wind promotes convective overturning and upwelling of nutrients, while high summer temperatures result in a marked vertical stratification and a warm, shallow mixed layer.[46] Day length and insolation intensity also control the extent of the photic zone. Subsurface productivity is limited by nutrient availability, as the anoxic bottom waters act as a sink for reduced nitrate, in the form of ammonia. The benthic zone also plays an important role in Black Sea nutrient cycling, as chemosynthetic organisms and anoxic geochemical pathways recycle nutrients which can be upwelled to the photic zone, enhancing productivity.[47]
In total, the Black Sea's biodiversity is around one-third of the Mediterranean's, and the sea is undergoing natural and human-mediated species invasions, a process known as Mediterranization.[48][49]
The main phytoplankton groups present in the Black Sea are dinoflagellates, diatoms, coccolithophores and cyanobacteria. Generally, the annual cycle of phytoplankton development comprises significant diatom and dinoflagellate-dominated spring production, followed by a weaker mixed assemblage of community development below the seasonal thermocline during summer months and surface-intensified autumn production.[46][50] This pattern of productivity is also augmented by an Emiliania huxleyi bloom during the late spring and summer months.
Since the 1960s, rapid industrial expansion along the Black Sea coastline and the construction of a major dam have significantly increased annual variability in the N:P:Si ratio in the basin. In coastal areas, the biological effect of these changes has been an increase in the frequency of monospecific phytoplankton blooms, with diatom bloom frequency increasing by a factor of 2.5 and non-diatom bloom frequency increasing by a factor of 6. Non-diatoms, such as the prymnesiophytes Emiliania huxleyi (a coccolithophore) and Chromulina sp., and the euglenophyte Eutreptia lanowii, are able to out-compete diatom species because of the limited availability of Si, a necessary constituent of diatom frustules.[69] As a consequence of these blooms, benthic macrophyte populations were deprived of light, while anoxia caused mass mortality in marine animals.[70][71]
The decline in macrophytes was further compounded by overfishing during the 1970s, while the invasive ctenophore Mnemiopsis reduced the biomass of copepods and other zooplankton in the late 1980s. This alien species, the warty comb jelly (Mnemiopsis leidyi), was able to establish itself in the basin, exploding from a few individuals to an estimated biomass of one billion metric tons.[72] The change in species composition in Black Sea waters also has consequences for hydrochemistry, as Ca-producing coccolithophores influence salinity and pH, although these ramifications have yet to be fully quantified. In central Black Sea waters, Si levels were also significantly reduced, due to a decrease in the flux of Si associated with advection across isopycnal surfaces. This phenomenon demonstrates the potential for localized alterations in Black Sea nutrient input to have basin-wide effects.
Pollution reduction and regulation efforts have led to a partial recovery of the Black Sea ecosystem during the 1990s, and an EU monitoring exercise, 'EROS21', revealed decreased N and P values, relative to the 1989 peak.[73] Recently, scientists have noted signs of ecological recovery, in part due to the construction of new sewage treatment plants in Slovakia, Hungary, Romania, and Bulgaria in connection with membership in the European Union. Mnemiopsis leidyi populations have been checked with the arrival of another alien species which feeds on them.[74]
Black Sea fauna (gallery): jellyfish, Actinia, goby, stingray, goat fish, hermit crab (Diogenes pugilator), blue sponge, spiny dogfish, seahorse, and Black Sea common dolphins with a kite-surfer off Sochi.
In the past, the range of the Asiatic lion extended from South Asia to the Balkans, possibly up to the Danube. Places like Turkey and the Trans-Caucasus were in this range. The Caspian tiger occurred in eastern Turkey and the Caucasus, at least. The lyuti zver (Old East Slavic for "fierce animal") that was encountered by Vladimir II Monomakh, Velikiy Kniaz of Kievan Rus' (which ranged to the Black Sea in the south),[75] may have been a tiger or leopard, rather than a wolf or lynx, due to the way it behaved towards him and his horse.[76]
Short-term climatic variation in the Black Sea region is significantly influenced by the operation of the North Atlantic Oscillation, the climatic mechanisms resulting from the interaction between north Atlantic and mid-latitude air masses.[77] While the exact mechanisms causing the North Atlantic Oscillation remain unclear,[78] it is thought that the climate conditions established in western Europe mediate the heat and precipitation fluxes reaching Central Europe and Eurasia, regulating the formation of winter cyclones, which are largely responsible for regional precipitation inputs[79] and influence Mediterranean sea surface temperatures (SSTs).[80]
The relative strength of these systems also limits the amount of cold air arriving from northern regions during winter.[81] Other influencing factors include the regional topography: depressions and storm systems arriving from the Mediterranean are funneled through the low land around the Bosporus, with the Pontic and Caucasus mountain ranges acting as waveguides, limiting the speed and paths of cyclones passing through the region.[82]
Some islands in the Black Sea belong to Bulgaria, Romania, Turkey, and Ukraine:
The Black Sea is connected to the World Ocean by a chain of two shallow straits, the Dardanelles and the Bosporus. The Dardanelles is 55 m (180 ft) deep and the Bosporus is as shallow as 36 m (118 ft). By comparison, at the height of the last ice age, sea levels were more than 100 m (330 ft) lower than they are now.
There is also evidence that water levels in the Black Sea were considerably lower at some point during the post-glacial period. Some researchers theorize that the Black Sea had been a landlocked freshwater lake (at least in upper layers) during the last glaciation and for some time after.
In the aftermath of the last glacial period, water levels in the Black Sea and the Aegean Sea rose independently until they were high enough to exchange water. The exact timeline of this development is still subject to debate. One possibility is that the Black Sea filled first, with excess freshwater flowing over the Bosporus sill and eventually into the Mediterranean Sea. There are also catastrophic scenarios, such as the "Black Sea deluge theory" put forward by William Ryan, Walter Pitman and Petko Dimitrov.
The Black Sea deluge is a hypothesized catastrophic rise in the level of the Black Sea circa 5600 BC due to waters from the Mediterranean Sea breaching a sill in the Bosporus Strait. The hypothesis made headlines when The New York Times published it in December 1996, shortly before it was published in an academic journal.[83] While it is agreed that the sequence of events described did occur, there is debate over the suddenness, dating, and magnitude of the events. The hypothesis has also led some to connect this catastrophe with prehistoric flood myths.[84]
The Black Sea was a busy waterway on the crossroads of the ancient world: the Balkans to the west, the Eurasian steppes to the north, the Caucasus and Central Asia to the east, Asia Minor and Mesopotamia to the south, and Greece to the south-west.
The oldest processed gold in the world was found in Varna, Bulgaria, and Greek mythology portrays the Argonauts as sailing on the Black Sea. The land at the eastern end of the Black Sea, Colchis (now Georgia), marked for the Greeks the edge of the known world.
The steppes to the north of the Black Sea have been suggested as the original homeland (Urheimat) of the speakers of the Proto-Indo-European language (PIE), the progenitor of the Indo-European language family, by some scholars such as Marija Gimbutas; others move the homeland further east towards the Caspian Sea, and yet others to Anatolia.
Greek presence in the Black Sea began at least as early as the 9th century BC with colonization of the Black Sea's southern coast. By 500 BC, permanent Greek communities existed all around the Black Sea, and a lucrative trade network connected the entirety of the Black Sea to the wider Mediterranean. While Greek colonies generally maintained very close cultural ties to their founding polis, Greek colonies in the Black Sea began to develop their own Black Sea Greek culture, known today as Pontic. The coastal community of Black Sea Greeks remained a prominent part of the Greek world for centuries.[85]
The Black Sea became a virtual Ottoman Navy lake within five years of Genoa losing the Crimean Peninsula in 1479, after which the only Western merchant vessels to sail its waters were those of Venice's old rival Ragusa. This restriction was challenged by the Russian Navy from 1783 until the relaxation of export controls in 1789 because of the French Revolution.[86][87]
The Black Sea was a significant naval theatre of World War I and saw both naval and land battles during World War II.
Ancient trade routes in the region are currently[when?] being extensively studied by scientists, as the Black Sea was sailed by Hittites, Carians, Colchians, Thracians, Greeks, Persians, Cimmerians, Scythians, Romans, Byzantines, Goths, Huns, Avars, Slavs, Varangians, Crusaders, Venetians, Genoese, Georgians, Tatars and Ottomans.
Perhaps the most promising areas in deepwater archaeology are the quest for submerged prehistoric settlements in the continental shelf and for ancient shipwrecks in the anoxic zone, which are expected to be exceptionally well preserved due to the absence of oxygen. This concentration of historical powers, combined with the preservative qualities of the deep anoxic waters of the Black Sea, has attracted increased interest from marine archaeologists who have begun to discover a large number of ancient ships and organic remains in a high state of preservation.
According to NATO, the Black Sea is a strategic corridor that provides smuggling channels for moving legal and illegal goods, including drugs, radioactive materials, and counterfeit goods that can be used to finance terrorism.[88]
According to the International Transport Workers' Federation 2013 study, there were at least 30 operating merchant seaports in the Black Sea (including at least 12 in Ukraine).[89]
According to the International Transport Workers' Federation 2013 study, there were around 2,400 commercial vessels operating in the Black Sea.[89]
Anchovy: the Turkish commercial fishing fleet catches around 300,000 tons per year on average. The fishery is carried out mainly in winter, and the largest portion of the stock is caught between November and December.[90]
In the 1980s, the Soviet Union started offshore drilling for petroleum in the sea's western portion (adjoining Ukraine's coast). Independent Ukraine continued and intensified that effort within its exclusive economic zone, inviting major international oil companies for exploration. The discovery of new, massive oilfields in the area stimulated an influx of foreign investment. It also provoked a short-term peaceful territorial dispute with Romania, which was resolved in 2011 by an international court redefining the exclusive economic zones between the two countries.
In the years following the end of the Cold War, the popularity of the Black Sea as a tourist destination steadily increased. Tourism at Black Sea resorts became one of the region's growth industries.[91] The following is a list of notable Black Sea resort towns:
The 1936 Montreux Convention provides for free passage of civilian ships between the international waters of the Black and Mediterranean Seas. However, a single country, Turkey, has complete control over the straits connecting the two seas. Military ships are treated as a separate category from civilian ships and may pass through the straits freely only if they belong to a Black Sea power. Other military ships have the right to pass through the straits if they are not at war with Turkey, and they may stay in the Black Sea basin for a limited time. The 1982 amendments to the Montreux Convention allow Turkey to close the straits at its discretion in both wartime and peacetime.[93]
The 1936 Montreux Convention governs the passage of vessels between the Black, the Mediterranean and Aegean Seas and the presence of military vessels belonging to non-littoral states in the Black Sea waters.[94]
In November 2018, the Kerch Strait incident took place, in which the Russian Navy and Coast Guard seized three Ukrainian Navy vessels as they attempted to pass through the strait.[95]
en/473.html.txt
Autumn, also known as fall in North American English,[1] is one of the four temperate seasons. Autumn marks the transition from summer to winter, in September (Northern Hemisphere) or March (Southern Hemisphere), when the duration of daylight becomes noticeably shorter and the temperature cools considerably. One of its main features in temperate climates is the shedding of leaves from deciduous trees.
Some cultures regard the autumnal equinox as "mid-autumn", while others with a longer temperature lag treat it as the start of autumn.[2] Meteorologists (and most of the temperate countries in the southern hemisphere)[3] use a definition based on Gregorian calendar months, with autumn being September, October, and November in the northern hemisphere,[4] and March, April, and May in the southern hemisphere. Persians celebrate the beginning of the autumn as Mehregan to honor Mithra (Mehr).
In North America, autumn traditionally starts with the September equinox (21 to 24 September)[5] and ends with the winter solstice (21 or 22 December).[6] Popular culture in the United States associates Labor Day, the first Monday in September, with the end of summer and the start of autumn; certain summer traditions, such as wearing white, are discouraged after that date.[7] As daytime and nighttime temperatures decrease, trees change color and then shed their leaves.[8] In the traditional East Asian solar terms, autumn starts on or around 8 August and ends on or about 7 November. In Ireland, the autumn months according to the national meteorological service, Met Éireann, are September, October and November.[9] However, according to the Irish Calendar, which is based on ancient Gaelic traditions, autumn lasts throughout the months of August, September and October, or possibly a few days later, depending on tradition. In the Irish language, September is known as Meán Fómhair ("middle of autumn") and October as Deireadh Fómhair ("end of autumn").[10][11]
In southern hemisphere countries such as Australia[12] and New Zealand, which tend to base their seasonal calendars meteorologically rather than astronomically,[13] autumn officially begins on 1 March and ends on 31 May.
The word autumn /ˈɔːtəm/ is derived from Latin autumnus, archaic auctumnus, possibly from the ancient Etruscan root autu- and has within it connotations of the passing of the year.[14] Alternative etymologies include Proto-Indo-European *h₃ewǵ- (“cold”) or *h₂sows- (“dry”).[15]
After the Roman era, the word continued to be used as the Old French word autompne (automne in modern French) or autumpne in Middle English,[16] and was later normalised to the original Latin. In the Medieval period, there are rare examples of its use as early as the 12th century, but by the 16th century, it was in common use.
Before the 16th century, harvest was the term usually used to refer to the season, as it is common in other West Germanic languages to this day (cf. Dutch herfst, German Herbst and Scots hairst). However, as more people gradually moved from working the land to living in towns, the word harvest lost its reference to the time of year and came to refer only to the actual activity of reaping, and autumn, as well as fall, began to replace it as a reference to the season.[17][18]
The alternative word fall for the season traces its origins to old Germanic languages. The exact derivation is unclear, with the Old English fiæll or feallan and the Old Norse fall all being possible candidates. However, these words all have the meaning "to fall from a height" and are clearly derived either from a common root or from each other. The term came to denote the season in 16th-century England, a contraction of Middle English expressions like "fall of the leaf" and "fall of the year".[19] Compare the origin of spring from "spring of the leaf" and "spring of the year".[20]
During the 17th century, English emigration to the British colonies in North America was at its peak, and the new settlers took the English language with them. While the term fall gradually became nearly obsolete in Britain, it became the more common term in North America.[21]
The name backend, a once common name for the season in Northern England, has today been largely replaced by the name autumn.[22]
Association with the transition from warm to cold weather, and its related status as the season of the primary harvest, has dominated its themes and popular images. In Western cultures, personifications of autumn are usually pretty, well-fed females adorned with fruits, vegetables and grains that ripen at this time. Many cultures feature autumnal harvest festivals, often the most important on their calendars. Still extant echoes of these celebrations are found in the autumn Thanksgiving holiday of the United States and Canada, and the Jewish Sukkot holiday with its roots as a full-moon harvest festival of "tabernacles" (living in outdoor huts around the time of harvest). There are also the many North American Indian festivals tied to harvest of ripe foods gathered in the wild, the Chinese Mid-Autumn or Moon festival, and many others. The predominant mood of these autumnal celebrations is a gladness for the fruits of the earth mixed with a certain melancholy linked to the imminent arrival of harsh weather.
This view is presented in English poet John Keats' poem To Autumn, where he describes the season as a time of bounteous fecundity, a time of 'mellow fruitfulness'.
In North America, while most foods are harvested during the autumn, foods particularly associated with the season include pumpkins (which are integral parts of both Thanksgiving and Halloween) and apples, which are used to make the seasonal beverage apple cider.
Autumn, especially in poetry, has often been associated with melancholia. The possibilities and opportunities of summer are gone, and the chill of winter is on the horizon. Skies turn grey, the amount of usable daylight drops rapidly, and many people turn inward, both physically and mentally.[23] It has been referred to as an unhealthy season.[24]
Similar examples may be found in Irish poet William Butler Yeats' poem The Wild Swans at Coole where the maturing season that the poet observes symbolically represents his own ageing self. Like the natural world that he observes, he too has reached his prime and now must look forward to the inevitability of old age and death. French poet Paul Verlaine's "Chanson d'automne" ("Autumn Song") is likewise characterised by strong, painful feelings of sorrow. Keats' To Autumn, written in September 1819, echoes this sense of melancholic reflection, but also emphasises the lush abundance of the season. The song "Autumn Leaves", based on the French song "Les Feuilles mortes", uses the melancholic atmosphere of the season and the end of summer as a metaphor for the mood of being separated from a loved one.[25]
Autumn is associated with Halloween (influenced by Samhain, a Celtic autumn festival),[26] and with it a widespread marketing campaign that promotes it. Halloween, October 31, is in autumn in the northern hemisphere. The television, film, book, costume, home decoration, and confectionery industries use this time of year to promote products closely associated with such a holiday, with promotions going from late August or early September to 31 October, since their themes rapidly lose strength once the holiday ends, and advertising starts concentrating on Christmas.
In some parts of the northern hemisphere, autumn has a strong association with the end of the summer holiday and the start of a new school year, particularly for children in primary and secondary education. "Back to School" advertising and preparations usually occur in the weeks leading up to the beginning of autumn.
Easter falls in autumn in the southern hemisphere.
Thanksgiving Day is a national holiday celebrated in Canada, in the United States, in some of the Caribbean islands and in Liberia. Thanksgiving is celebrated on the second Monday of October in Canada and on the fourth Thursday of November in the United States, and around the same part of the year in other places. Similarly named festival holidays occur in Germany and Japan.
Television stations and networks, particularly in North America, traditionally begin their regular seasons in their autumn, with new series and new episodes of existing series debuting mostly during late September or early October (series that debut outside the fall season are usually known as mid-season replacements). A sweeps period takes place in November to measure Nielsen Ratings.
American football is played almost exclusively in the autumn months; at the high school level, seasons run from late August through early November, with some playoff games and holiday rivalry contests being played as late as Thanksgiving. In many American states, the championship games take place in early December. College football's regular season runs from September through November, while the main professional circuit, the National Football League, plays from September through to early January. Summer sports, such as stock car racing, Canadian football, Major League Soccer, and Major League Baseball, wrap up their seasons in early to late autumn; MLB's championship World Series is known popularly as the "Fall Classic".[27] (Amateur baseball is usually finished by August.) Likewise, professional winter sports, such as ice hockey, basketball, and most leagues of association football in Europe, are in the early stages of their seasons during autumn; American college basketball and college ice hockey play teams outside their athletic conferences during the late autumn before their in-conference schedules begin in winter.
The Christian religious holidays of All Saints' Day and All Souls' Day are observed in autumn in the Northern hemisphere.
Since 1997, Autumn has been one of the top 100 names for girls in the United States.[28]
In Indian mythology, autumn is considered to be the preferred season for the goddess of learning Saraswati, who is also known by the name of "goddess of autumn" (Sharada).
In Asian mysticism, autumn is associated with the element of metal, and consequently with the colour white, the White Tiger of the West, and death and mourning.
Although colour change in leaves occurs wherever deciduous trees are found, coloured autumn foliage is noted in various regions of the world: most of North America, Eastern Asia (including China, Korea, and Japan), Europe, the forest of Patagonia, eastern Australia and New Zealand's South Island.
Eastern Canada and New England are famous for their autumnal foliage,[29][30] and this attracts major tourism (worth billions of US dollars) for the regions.[31][32]
Autumn, by Giuseppe Collignon
Autumn, by Pierre Le Gros the Elder
Autumn (1573), by Giuseppe Arcimboldo
Autumn (1896), by Art Nouveau artist Alphonse Mucha
Autumn (1871), by Currier & Ives
This 1905 print by Maxfield Parrish illustrated John Keats' poem To Autumn
en/4730.html.txt
A bridge is a structure built to span a physical obstacle, such as a body of water, valley, or road, without closing the way underneath. It is constructed for the purpose of providing passage over the obstacle, usually something that can be detrimental to cross otherwise. There are many different designs that each serve a particular purpose and apply to different situations. Designs of bridges vary depending on the function of the bridge, the nature of the terrain where the bridge is constructed and anchored, the material used to make it, and the funds available to build it.
Most likely the earliest bridges were fallen trees and stepping stones, while Neolithic people built boardwalk bridges across marshland. The Arkadiko Bridge dating from the 13th century BC, in the Peloponnese, in southern Greece is one of the oldest arch bridges still in existence and use.
The Oxford English Dictionary traces the origin of the word bridge to an Old English word brycg, of the same meaning.[1] The word can be traced directly back to Proto-Indo-European *bʰrēw-. The word for the card game of the same name has a different origin.
The simplest type of bridge is stepping stones, so this may have been one of the earliest types. Neolithic people also built a form of boardwalk across marshes, of which the Sweet Track and the Post Track are examples from England that are around 6000 years old.[2] Ancient peoples would undoubtedly also have used log bridges: timber bridges[3] formed by trees that fell naturally or were intentionally felled or placed across streams. Some of the first man-made bridges with significant span were probably intentionally felled trees.[4]
Among the oldest timber bridges is the Holzbrücke Rapperswil-Hurden crossing upper Lake Zürich in Switzerland; the prehistoric timber piles discovered to the west of the Seedamm date back to 1523 BC. The first wooden footbridge led across Lake Zürich, followed by several reconstructions at least until the late 2nd century AD, when the Roman Empire built a 6-metre-wide (20 ft) wooden bridge. Between 1358 and 1360, Rudolf IV, Duke of Austria, built a 'new' wooden bridge across the lake that remained in use until 1878 – measuring approximately 1,450 metres (4,760 ft) in length and 4 metres (13 ft) in width. On April 6, 2001, the reconstructed wooden footbridge was opened; it is the longest wooden bridge in Switzerland.
The Arkadiko Bridge is one of four Mycenaean corbel arch bridges that formed part of a network of roads, designed to accommodate chariots, between the fort of Tiryns and the town of Epidauros in the Peloponnese, in southern Greece. Dating to the Greek Bronze Age (13th century BC), it is one of the oldest arch bridges still in existence and use.
Several intact arched stone bridges from the Hellenistic era can be found in the Peloponnese.[5]
The greatest bridge builders of antiquity were the ancient Romans.[6] The Romans built arch bridges and aqueducts that could stand in conditions that would damage or destroy earlier designs. Some stand today.[7] An example is the Alcántara Bridge, built over the river Tagus, in Spain. The Romans also used cement, which reduced the variation of strength found in natural stone.[8] One type of cement, called pozzolana, consisted of water, lime, sand, and volcanic rock. Brick and mortar bridges were built after the Roman era, as the technology for cement was lost (then later rediscovered).
In India, the Arthashastra treatise by Kautilya mentions the construction of dams and bridges.[9] A Mauryan bridge near Girnar was surveyed by James Prinsep.[10] The bridge was swept away during a flood, and later repaired by Puspagupta, the chief architect of emperor Chandragupta I.[10] The use of stronger bridges made of plaited bamboo and iron chain was visible in India by about the 4th century.[11] A number of bridges, both for military and commercial purposes, were constructed by the Mughal administration in India.[12]
Although large Chinese bridges of wooden construction existed at the time of the Warring States period, the oldest surviving stone bridge in China is the Zhaozhou Bridge, built from 595 to 605 AD during the Sui dynasty. This bridge is also historically significant as it is the world's oldest open-spandrel stone segmental arch bridge. European segmental arch bridges date back to at least the Alconétar Bridge (approximately 2nd century AD), while the enormous Roman era Trajan's Bridge (105 AD) featured open-spandrel segmental arches in wooden construction.[citation needed]
Rope bridges, a simple type of suspension bridge, were used by the Inca civilization in the Andes mountains of South America, just prior to European colonization in the 16th century.
During the 18th century there were many innovations in the design of timber bridges by Hans Ulrich Grubenmann, Johannes Grubenmann, and others. The first book on bridge engineering was written by Hubert Gautier in 1716.
A major breakthrough in bridge technology came with the erection of the Iron Bridge in Shropshire, England, in 1779. It was the first bridge to use cast iron, in the form of arches spanning the River Severn.[13] With the Industrial Revolution in the 19th century, truss systems of wrought iron were developed for larger bridges, but iron does not have the tensile strength to support large loads. With the advent of steel, which has a high tensile strength, much larger bridges were built, many using the ideas of Gustave Eiffel.[citation needed]
In Canada and the United States, numerous timber covered bridges were built in the late 1700s to the late 1800s, reminiscent of earlier designs in Germany and Switzerland. Some covered bridges were also built in Asia.[14] In later years, some were partly made of stone or metal but the trusses were usually still made of wood; in the United States, there were three styles of trusses, the Queen Post, the Burr Arch and the Town Lattice.[15] Hundreds of these structures still stand in North America. They were brought to the attention of the general public in the 1990s by the novel, movie, and play The Bridges of Madison County.[16][17]
In 1927 welding pioneer Stefan Bryła designed the first welded road bridge in the world, the Maurzyce Bridge which was later built across the river Słudwia at Maurzyce near Łowicz, Poland in 1929. In 1995, the American Welding Society presented the Historic Welded Structure Award for the bridge to Poland.[18]
Bridges can be categorized in several different ways. Common categories include the type of structural elements used, by what they carry, whether they are fixed or movable, and by the materials used.
Bridges may be classified by how the actions of tension, compression, bending, torsion and shear are distributed through their structure. Most bridges will employ all of these to some degree, but only a few will predominate. The separation of forces and moments may be quite clear. In a suspension or cable-stayed bridge, the elements in tension are distinct in shape and placement. In other cases the forces may be distributed among a large number of members, as in a truss.
The world's longest beam bridge is Lake Pontchartrain Causeway in southern Louisiana in the United States, at 23.83 miles (38.35 km), with individual spans of 56 feet (17 m).[21] Beam bridges are the simplest and oldest type of bridge in use today,[22] and are a popular type.[23]
Some cantilever bridges also have a smaller beam connecting the two cantilevers, for extra strength.
The largest cantilever bridge is the 549-metre (1,801 ft) Quebec Bridge in Quebec, Canada.
With the span of 220 metres (720 ft), the Solkan Bridge over the Soča River at Solkan in Slovenia is the second-largest stone bridge in the world and the longest railroad stone bridge. It was completed in 1905. Its arch, which was constructed from over 5,000 tonnes (4,900 long tons; 5,500 short tons) of stone blocks in just 18 days, is the second-largest stone arch in the world, surpassed only by the Friedensbrücke (Syratalviadukt) in Plauen, and the largest railroad stone arch. The arch of the Friedensbrücke, which was built in the same year, has the span of 90 m (295 ft) and crosses the valley of the Syrabach River. The difference between the two is that the Solkan Bridge was built from stone blocks, whereas the Friedensbrücke was built from a mixture of crushed stone and cement mortar.[24]
The world's largest arch bridge is the Chaotianmen Bridge over the Yangtze River with a length of 1,741 m (5,712 ft) and a span of 552 m (1,811 ft). The bridge was opened April 29, 2009, in Chongqing, China.[25]
The longest suspension bridge in the world is the 3,909 m (12,825 ft) Akashi Kaikyō Bridge in Japan.[27]
The longest cable-stayed bridge since 2012 is the 1,104 m (3,622 ft) Russky Bridge in Vladivostok, Russia.[31]
Some engineers subdivide 'beam' bridges into slab, beam-and-slab and box girder on the basis of their cross-section.[32] A slab can be solid or voided (though this is no longer favored for inspectability reasons) while beam-and-slab consists of concrete or steel girders connected by a concrete slab.[33] A box-girder cross-section consists of a single-cell or multi-cellular box. In recent years, integral bridge construction has also become popular.
Most bridges are fixed bridges, meaning they have no moving parts and stay in one place until they fail or are demolished. Temporary bridges, such as Bailey bridges, are designed to be assembled and taken apart, transported to a different site, and re-used. They are important in military engineering, and are also used to carry traffic while an old bridge is being rebuilt. Movable bridges are designed to move out of the way of boats or other kinds of traffic, which would otherwise be too tall to fit. These are generally electrically powered.[34]
Double-decked (or double-decker) bridges have two levels, such as the George Washington Bridge, connecting New York City to Bergen County, New Jersey, US, the world's busiest bridge, carrying 102 million vehicles annually;[35][36] truss work between the roadway levels provided stiffness to the roadways and reduced movement of the upper level when the lower level was installed three decades after the upper level. The Tsing Ma Bridge and Kap Shui Mun Bridge in Hong Kong have six lanes on their upper decks, and on their lower decks there are two lanes and a pair of tracks for MTR metro trains. Some double-decked bridges only use one level for street traffic; the Washington Avenue Bridge in Minneapolis reserves its lower level for automobile and light rail traffic and its upper level for pedestrian and bicycle traffic (predominantly students at the University of Minnesota). Likewise, in Toronto, the Prince Edward Viaduct has five lanes of motor traffic, bicycle lanes, and sidewalks on its upper deck; and a pair of tracks for the Bloor–Danforth subway line on its lower deck. The western span of the San Francisco–Oakland Bay Bridge also has two levels.
Robert Stephenson's High Level Bridge across the River Tyne in Newcastle upon Tyne, completed in 1849, is an early example of a double-decked bridge. The upper level carries a railway, and the lower level is used for road traffic. Other examples include Britannia Bridge over the Menai Strait and Craigavon Bridge in Derry, Northern Ireland. The Oresund Bridge between Copenhagen and Malmö consists of a four-lane highway on the upper level and a pair of railway tracks at the lower level. Tower Bridge in London is a different example of a double-decked bridge, with the central section consisting of a low-level bascule span and a high-level footbridge.
A viaduct is made up of multiple bridges connected into one longer structure. The longest and some of the highest bridges are viaducts, such as the Lake Pontchartrain Causeway and Millau Viaduct.
A multi-way bridge has three or more separate spans which meet near the center of the bridge. Multi-way bridges with only three spans appear as a "T" or "Y" when viewed from above. Multi-way bridges are extremely rare. The Tridge, Margaret Bridge, and Zanesville Y-Bridge are examples.
A bridge can be categorized by what it is designed to carry, such as trains, pedestrian or road traffic (road bridge), a pipeline or waterway for water transport or barge traffic. An aqueduct is a bridge that carries water, resembling a viaduct, which is a bridge that connects points of equal height. A road-rail bridge carries both road and rail traffic. Overway is a term for a bridge that separates incompatible intersecting traffic, especially road and rail.[37] A bridge can carry overhead power lines as does the Storstrøm Bridge.[citation needed]
Some bridges accommodate other purposes, such as the tower of Nový Most Bridge in Bratislava, which features a restaurant, or a bridge-restaurant which is a bridge built to serve as a restaurant. Other suspension bridge towers carry transmission antennas.[citation needed]
Conservationists use wildlife overpasses to reduce habitat fragmentation and animal-vehicle collisions. The first animal bridges sprang up in France in the 1950s, and these types of bridges are now used worldwide to protect both large and small wildlife.[38][39][40]
Bridges are subject to unplanned uses as well. The areas underneath some bridges have become makeshift shelters and homes to homeless people, and the undertimbers of bridges all around the world are spots of prevalent graffiti. Some bridges attract people attempting suicide, and become known as suicide bridges.[citation needed][41]
The materials used to build the structure are also used to categorize bridges. Until the end of the 18th century, bridges were made out of timber, stone and masonry. Modern bridges are currently built in concrete, steel, fiber reinforced polymers (FRP), stainless steel or combinations of those materials. Living bridges have been constructed of live plants such as Ficus elastica tree roots in India[42] and wisteria vines in Japan.[43]
The Tank bridge transporter (TBT) has the same cross-country performance as a tank even when fully loaded. It can deploy, drop off and load bridges independently, but it cannot recover them.
Unlike buildings whose design is led by architects, bridges are usually designed by engineers. This follows from the importance of the engineering requirements; namely spanning the obstacle and having the durability to survive, with minimal maintenance, in an aggressive outdoor environment.[33] Bridges are first analysed; the bending moment and shear force distributions are calculated due to the applied loads. For this, the finite element method is the most popular. The analysis can be one, two or three-dimensional. For the majority of bridges, a two-dimensional plate model (often with stiffening beams) or an upstand finite element model is sufficient.[48] On completion of the analysis, the bridge is designed to resist the applied bending moments and shear forces; section sizes are selected with sufficient capacity to resist the stresses. Many bridges are made of prestressed concrete, which has good durability properties, either by pre-tensioning of beams prior to installation or post-tensioning on site.
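For the simplest load case, the bending moment and shear force distributions mentioned above have classical closed-form maxima; a minimal sketch, with the function name and numbers invented purely for illustration:

```python
def simply_supported_udl(w, span):
    """Maximum load effects for a simply supported beam carrying a
    uniformly distributed load w (kN/m) over the given span (m).
    Classical results: M_max = w*L^2/8 at midspan, V_max = w*L/2
    at each support."""
    m_max = w * span ** 2 / 8.0  # bending moment, kN*m
    v_max = w * span / 2.0       # shear force, kN
    return m_max, v_max

# Example: a 20 m span under 10 kN/m of load
m_max, v_max = simply_supported_udl(10.0, 20.0)
print(m_max, v_max)  # 500.0 100.0
```

A finite element analysis of a real bridge deck produces the same two quantities, distributed over the whole structure rather than a single line.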
In most countries, bridges, like other structures, are designed according to Load and Resistance Factor Design (LRFD) principles. In simple terms, this means that the load is factored up by a factor greater than unity, while the resistance or capacity of the structure is factored down, by a factor less than unity. The effect of the factored load (stress, bending moment) should be less than the factored resistance to that effect. Both of these factors allow for uncertainty and are greater when the uncertainty is greater.
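The factored comparison can be written in a few lines; a sketch of the LRFD criterion just described, where the factor values are arbitrary examples rather than figures from any particular design code:

```python
def lrfd_ok(load_effect, load_factor, resistance, resistance_factor):
    """LRFD acceptance check: the load effect, factored up by a
    factor greater than unity, must not exceed the resistance,
    factored down by a factor less than unity. Each factor covers
    the uncertainty on its own side of the comparison."""
    return load_factor * load_effect <= resistance_factor * resistance

# A girder moment of 500 kN*m against a 900 kN*m nominal capacity
print(lrfd_ok(500.0, 1.5, 900.0, 0.9))  # True (750 <= 810)
```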
Most bridges are utilitarian in appearance, but in some cases, the appearance of the bridge can have great importance.[49] Often, this is the case with a large bridge that serves as an entrance to a city, or crosses over a main harbor entrance. These are sometimes known as signature bridges. Designers of bridges in parks and along parkways often place more importance on aesthetics, as well. Examples include the stone-faced bridges along the Taconic State Parkway in New York.
To create a beautiful image, some bridges are built much taller than necessary. This type, often found in east-Asian style gardens, is called a Moon bridge, evoking a rising full moon. Other garden bridges may cross only a dry bed of stream washed pebbles, intended only to convey an impression of a stream. Often in palaces a bridge will be built over an artificial waterway as symbolic of a passage to an important place or state of mind. A set of five bridges cross a sinuous waterway in an important courtyard of the Forbidden City in Beijing, China. The central bridge was reserved exclusively for the use of the Emperor and Empress, with their attendants.
Bridge maintenance consists of a combination of structural health monitoring and testing. This is regulated in country-specific engineering standards and includes ongoing monitoring every three to six months, a simple test or inspection every two to three years and a major inspection every six to ten years. In Europe, the cost of maintenance is considerable[32] and in some countries exceeds spending on new bridges. The lifetime of welded steel bridges can be significantly extended by aftertreatment of the weld transitions, potentially yielding a high benefit by allowing existing bridges to be used far beyond their planned lifetime.
While the response of a bridge to the applied loading is well understood, the applied traffic loading itself is still the subject of research.[50] This is a statistical problem as loading is highly variable, particularly for road bridges. Load effects in bridges (stresses, bending moments) are designed for using the principles of Load and Resistance Factor Design. Before factoring to allow for uncertainty, the load effect is generally taken to be the maximum characteristic value in a specified return period. Notably, in Europe, it is the maximum value expected in 1000 years.
Bridge standards generally include a load model, deemed to represent the characteristic maximum load to be expected in the return period. In the past, these load models were agreed by standard drafting committees of experts but today, this situation is changing. It is now possible to measure the components of bridge traffic load, to weigh trucks, using weigh-in-motion (WIM) technologies. With extensive WIM databases, it is possible to calculate the maximum expected load effect in the specified return period. This is an active area of research, addressing issues of opposing direction lanes,[51][52] side-by-side (same direction) lanes,[53][54] traffic growth,[55] permit/non-permit vehicles[56] and long-span bridges (see below). Rather than repeat this complex process every time a bridge is to be designed, standards authorities specify simplified notional load models, notably HL-93,[57][58] intended to give the same load effects as the characteristic maximum values. The Eurocode is an example of a standard for bridge traffic loading that was developed in this way.[59]
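As a toy illustration of the idea of a characteristic maximum (not the procedure used in any actual standard), one can read off a high empirical quantile of simulated block maxima; real practice fits an extreme-value distribution to WIM-derived maxima, and all numbers below are hypothetical:

```python
import random

def characteristic_maximum(block_maxima, return_period):
    """Empirical quantile with non-exceedance probability
    1 - 1/return_period, a crude stand-in for fitting an
    extreme-value distribution to measured block maxima."""
    ordered = sorted(block_maxima)
    p = 1.0 - 1.0 / return_period
    index = min(int(p * len(ordered)), len(ordered) - 1)
    return ordered[index]

random.seed(42)
# Hypothetical daily-maximum gross vehicle weights (tonnes)
daily_maxima = [40.0 + random.expovariate(0.2) for _ in range(10000)]
print(characteristic_maximum(daily_maxima, 1000))
```

With a real WIM database in place of the simulated weights, the same quantile idea underlies extrapolation to the design return period.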
Most bridge standards are only applicable for short and medium spans[60] - for example, the Eurocode is only applicable for loaded lengths up to 200 m. Longer spans are dealt with on a case by case basis. It is generally accepted that the intensity of load reduces as span increases because the probability of many trucks being closely spaced and extremely heavy reduces as the number of trucks involved increases. It is also generally assumed that short spans are governed by a small number of trucks traveling at high speed, with an allowance for dynamics. Longer spans on the other hand, are governed by congested traffic and no allowance for dynamics is needed. Calculating the loading due to congested traffic remains a challenge as there is a paucity of data on inter-vehicle gaps, both within-lane and inter-lane, in congested conditions. Weigh-in-Motion (WIM) systems provide data on inter-vehicle gaps but only operate well in free flowing traffic conditions. Some authors have used cameras to measure gaps and vehicle lengths in jammed situations and have inferred weights from lengths using WIM data.[61] Others have used microsimulation to generate typical clusters of vehicles on the bridge.[62][63][64]
Bridges vibrate under load and this contributes, to a greater or lesser extent, to the stresses.[33] Vibration and dynamics are generally more significant for slender structures such as pedestrian bridges and long-span road or rail bridges. One of the most famous examples is the Tacoma Narrows Bridge that collapsed shortly after being constructed due to excessive vibration. More recently, the Millennium Bridge in London vibrated excessively under pedestrian loading and was closed and retrofitted with a system of dampers. For smaller bridges, dynamics is not catastrophic but can contribute an added amplification to the stresses due to static effects. For example, the Eurocode for bridge loading specifies amplifications of between 10% and 70%, depending on the span, the number of traffic lanes and the type of stress (bending moment or shear force).[65]
There have been many studies of the dynamic interaction between vehicles and bridges during vehicle crossing events. Fryba[66] did pioneering work on the interaction of a moving load and an Euler-Bernoulli beam. With increased computing power, vehicle-bridge interaction (VBI) models have become ever more sophisticated.[67][68][69][70] The concern is that one of the many natural frequencies associated with the vehicle will resonate with the bridge first natural frequency.[71] The vehicle-related frequencies include body bounce and axle hop but there are also pseudo-frequencies associated with the vehicle's speed of crossing[72] and there are many frequencies associated with the surface profile.[50] Given the wide variety of heavy vehicles on road bridges, a statistical approach has been suggested, with VBI analyses carried out for many statically extreme loading events.[73]
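The resonance concern can be illustrated with the classical Euler-Bernoulli result for a simply supported span; the stiffness and mass figures below are invented for illustration only:

```python
import math

def natural_frequency(n, span, EI, mass_per_length):
    """n-th natural frequency (Hz) of a simply supported
    Euler-Bernoulli beam: omega_n = (n*pi/L)^2 * sqrt(EI/m),
    with EI in N*m^2 and m (mass per length) in kg/m."""
    omega = (n * math.pi / span) ** 2 * math.sqrt(EI / mass_per_length)
    return omega / (2.0 * math.pi)

# Illustrative span: 30 m, EI = 5e10 N*m^2, 10 tonnes per metre
f1 = natural_frequency(1, 30.0, 5e10, 10000.0)
print(round(f1, 2))  # about 3.9 Hz, comparable to truck body-bounce frequencies
```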
The failure of bridges is of special concern for structural engineers in trying to learn lessons vital to bridge design, construction and maintenance. The failure of bridges first assumed national interest during the Victorian era when many new designs were being built, often using new materials.
In the United States, the National Bridge Inventory tracks the structural evaluations of all bridges, including designations such as "structurally deficient" and "functionally obsolete".
There are several methods used to monitor the condition of large structures like bridges. Many long-span bridges are now routinely monitored with a range of sensors. Many types of sensors are used, including strain transducers, accelerometers,[74] tiltmeters, and GPS. Accelerometers have the advantage that they are inertial, i.e., they do not require a reference point to measure from. This is often a problem for distance or deflection measurement, especially if the bridge is over water.
An option for structural-integrity monitoring is "non-contact monitoring", which uses the Doppler effect (Doppler shift). A laser beam from a Laser Doppler Vibrometer is directed at the point of interest, and the vibration amplitude and frequency are extracted from the Doppler shift of the laser beam frequency due to the motion of the surface.[75] The advantage of this method is that the setup time for the equipment is faster and, unlike an accelerometer, this makes measurements possible on multiple structures in as short a time as possible. Additionally, this method can measure specific points on a bridge that might be difficult to access. However, vibrometers are relatively expensive and have the disadvantage that a reference point is needed to measure from.
Snapshots in time of the external condition of a bridge can be recorded using Lidar to aid bridge inspection.[76] This can provide measurement of the bridge geometry (to facilitate the building of a computer model) but the accuracy is generally insufficient to measure bridge deflections under load.
While larger modern bridges are routinely monitored electronically, smaller bridges are generally inspected visually by trained inspectors. There is considerable research interest in the challenge of smaller bridges as they are often remote and do not have electrical power on site. Possible solutions are the installation of sensors on a specialist inspection vehicle and the use of its measurements as it drives over the bridge to infer information about the bridge condition.[77][78][79] These vehicles can be equipped with accelerometers, gyrometers, Laser Doppler Vibrometers[80][81] and some even have the capability to apply a resonant force to the road surface in order to dynamically excite the bridge at its resonant frequency.
|
en/4731.html.txt
ADDED
@@ -0,0 +1,106 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
1 |
+
|
2 |
+
|
3 |
+
A bridge is a structure built to span a physical obstacle, such as a body of water, valley, or road, without closing the way underneath. It is constructed for the purpose of providing passage over the obstacle, usually something that can be detrimental to cross otherwise. There are many different designs that each serve a particular purpose and apply to different situations. Designs of bridges vary depending on the function of the bridge, the nature of the terrain where the bridge is constructed and anchored, the material used to make it, and the funds available to build it.
|
4 |
+
|
5 |
+
Most likely the earliest bridges were fallen trees and stepping stones, while Neolithic people built boardwalk bridges across marshland. The Arkadiko Bridge dating from the 13th century BC, in the Peloponnese, in southern Greece is one of the oldest arch bridges still in existence and use.
|
6 |
+
|
7 |
+
The Oxford English Dictionary traces the origin of the word bridge to an Old English word brycg, of the same meaning.[1] The word can be traced directly back to Proto-Indo-European *bʰrēw-. The word for the card game of the same name has a different origin.
|
8 |
+
|
9 |
+
The simplest type of a bridge is stepping stones, so this may have been one of the earliest types. Neolithic people also built a form of boardwalk across marshes, of which the Sweet Track and the Post Track, are examples from England that are around 6000 years old.[2] Undoubtedly ancient peoples would also have used log bridges; that is a timber bridge[3] that fall naturally or are intentionally felled or placed across streams. Some of the first man-made bridges with significant span were probably intentionally felled trees.[4]
|
10 |
+
|
11 |
+
Among the oldest timber bridges is the Holzbrücke Rapperswil-Hurden crossing upper Lake Zürich in Switzerland; the prehistoric timber piles discovered to the west of the Seedamm date back to 1523 BC. The first wooden footbridge led across Lake Zürich, followed by several reconstructions at least until the late 2nd century AD, when the Roman Empire built a 6-metre-wide (20 ft) wooden bridge. Between 1358 and 1360, Rudolf IV, Duke of Austria, built a 'new' wooden bridge across the lake that has been used to 1878 – measuring approximately 1,450 metres (4,760 ft) in length and 4 metres (13 ft) wide. On April 6, 2001, the reconstructed wooden footbridge was opened, being the longest wooden bridge in Switzerland.
|
12 |
+
|
13 |
+
The Arkadiko Bridge is one of four Mycenaean corbel arch bridges part of a former network of roads, designed to accommodate chariots, between the fort of Tiryns and town of Epidauros in the Peloponnese, in southern Greece. Dating to the Greek Bronze Age (13th century BC), it is one of the oldest arch bridges still in existence and use.
Several intact arched stone bridges from the Hellenistic era can be found in the Peloponnese.[5]
The greatest bridge builders of antiquity were the ancient Romans.[6] The Romans built arch bridges and aqueducts that could stand in conditions that would damage or destroy earlier designs. Some stand today.[7] An example is the Alcántara Bridge, built over the river Tagus, in Spain. The Romans also used cement, which reduced the variation of strength found in natural stone.[8] One type of cement, called pozzolana, consisted of water, lime, sand, and volcanic rock. Brick and mortar bridges were built after the Roman era, as the technology for cement was lost (then later rediscovered).
In India, the Arthashastra treatise by Kautilya mentions the construction of dams and bridges.[9] A Mauryan bridge near Girnar was surveyed by James Prinsep.[10] The bridge was swept away during a flood, and later repaired by Puspagupta, the chief architect of emperor Chandragupta I.[10] The use of stronger bridges using plaited bamboo and iron chain was visible in India by about the 4th century.[11] A number of bridges, both for military and commercial purposes, were constructed by the Mughal administration in India.[12]
Although large Chinese bridges of wooden construction existed at the time of the Warring States period, the oldest surviving stone bridge in China is the Zhaozhou Bridge, built from 595 to 605 AD during the Sui dynasty. This bridge is also historically significant as it is the world's oldest open-spandrel stone segmental arch bridge. European segmental arch bridges date back to at least the Alconétar Bridge (approximately 2nd century AD), while the enormous Roman era Trajan's Bridge (105 AD) featured open-spandrel segmental arches in wooden construction.[citation needed]
Rope bridges, a simple type of suspension bridge, were used by the Inca civilization in the Andes mountains of South America, just prior to European colonization in the 16th century.
During the 18th century there were many innovations in the design of timber bridges by Hans Ulrich Grubenmann, Johannes Grubenmann, and others. The first book on bridge engineering was written by Hubert Gautier in 1716.
A major breakthrough in bridge technology came with the erection of the Iron Bridge in Shropshire, England in 1779. It used cast iron for the first time as arches to cross the river Severn.[13] With the Industrial Revolution in the 19th century, truss systems of wrought iron were developed for larger bridges, but iron does not have the tensile strength to support large loads. With the advent of steel, which has a high tensile strength, much larger bridges were built, many using the ideas of Gustave Eiffel.[citation needed]
In Canada and the United States, numerous timber covered bridges were built in the late 1700s to the late 1800s, reminiscent of earlier designs in Germany and Switzerland. Some covered bridges were also built in Asia.[14] In later years, some were partly made of stone or metal but the trusses were usually still made of wood; in the United States, there were three styles of trusses, the Queen Post, the Burr Arch and the Town Lattice.[15] Hundreds of these structures still stand in North America. They were brought to the attention of the general public in the 1990s by the novel, movie, and play The Bridges of Madison County.[16][17]
In 1927 welding pioneer Stefan Bryła designed the first welded road bridge in the world, the Maurzyce Bridge which was later built across the river Słudwia at Maurzyce near Łowicz, Poland in 1929. In 1995, the American Welding Society presented the Historic Welded Structure Award for the bridge to Poland.[18]
Bridges can be categorized in several different ways. Common categories include the type of structural elements used, by what they carry, whether they are fixed or movable, and by the materials used.
Bridges may be classified by how the actions of tension, compression, bending, torsion and shear are distributed through their structure. Most bridges will employ all of these to some degree, but only a few will predominate. The separation of forces and moments may be quite clear. In a suspension or cable-stayed bridge, the elements in tension are distinct in shape and placement. In other cases the forces may be distributed among a large number of members, as in a truss.
The world's longest beam bridge is Lake Pontchartrain Causeway in southern Louisiana in the United States, at 23.83 miles (38.35 km), with individual spans of 56 feet (17 m).[21] Beam bridges are the simplest and oldest type of bridge in use today,[22] and are a popular type.[23]
Some cantilever bridges also have a smaller beam connecting the two cantilevers, for extra strength.
The largest cantilever bridge is the 549-metre (1,801 ft) Quebec Bridge in Quebec, Canada.
With a span of 220 metres (720 ft), the Solkan Bridge over the Soča River at Solkan in Slovenia is the second-largest stone bridge in the world and the longest railroad stone bridge. It was completed in 1905. Its arch, which was constructed from over 5,000 tonnes (4,900 long tons; 5,500 short tons) of stone blocks in just 18 days, is the second-largest stone arch in the world, surpassed only by the Friedensbrücke (Syratalviadukt) in Plauen, and the largest railroad stone arch. The arch of the Friedensbrücke, which was built in the same year, has a span of 90 m (295 ft) and crosses the valley of the Syrabach River. The difference between the two is that the Solkan Bridge was built from stone blocks, whereas the Friedensbrücke was built from a mixture of crushed stone and cement mortar.[24]
The world's largest arch bridge is the Chaotianmen Bridge over the Yangtze River with a length of 1,741 m (5,712 ft) and a span of 552 m (1,811 ft). The bridge was opened April 29, 2009, in Chongqing, China.[25]
The longest suspension bridge in the world is the 3,909 m (12,825 ft) Akashi Kaikyō Bridge in Japan.[27]
The longest cable-stayed bridge since 2012 is the 1,104 m (3,622 ft) Russky Bridge in Vladivostok, Russia.[31]
Some engineers sub-divide 'beam' bridges into slab, beam-and-slab and box girder on the basis of their cross-section.[32] A slab can be solid or voided (though voided slabs are no longer favored, for inspectability reasons) while beam-and-slab consists of concrete or steel girders connected by a concrete slab.[33] A box-girder cross-section consists of a single-cell or multi-cellular box. In recent years, integral bridge construction has also become popular.
Most bridges are fixed bridges, meaning they have no moving parts and stay in one place until they fail or are demolished. Temporary bridges, such as Bailey bridges, are designed to be assembled and taken apart, transported to a different site, and re-used. They are important in military engineering, and are also used to carry traffic while an old bridge is being rebuilt. Movable bridges are designed to move out of the way of boats or other kinds of traffic, which would otherwise be too tall to fit. These are generally electrically powered.[34]
Double-decked (or double-decker) bridges have two levels, such as the George Washington Bridge, connecting New York City to Bergen County, New Jersey, US, which is the world's busiest bridge, carrying 102 million vehicles annually;[35][36] truss work between the roadway levels provided stiffness to the roadways and reduced movement of the upper level when the lower level was installed three decades after the upper level. The Tsing Ma Bridge and Kap Shui Mun Bridge in Hong Kong have six lanes on their upper decks, and on their lower decks there are two lanes and a pair of tracks for MTR metro trains. Some double-decked bridges only use one level for street traffic; the Washington Avenue Bridge in Minneapolis reserves its lower level for automobile and light rail traffic and its upper level for pedestrian and bicycle traffic (predominantly students at the University of Minnesota). Likewise, in Toronto, the Prince Edward Viaduct has five lanes of motor traffic, bicycle lanes, and sidewalks on its upper deck; and a pair of tracks for the Bloor–Danforth subway line on its lower deck. The western span of the San Francisco–Oakland Bay Bridge also has two levels.
Robert Stephenson's High Level Bridge across the River Tyne in Newcastle upon Tyne, completed in 1849, is an early example of a double-decked bridge. The upper level carries a railway, and the lower level is used for road traffic. Other examples include Britannia Bridge over the Menai Strait and Craigavon Bridge in Derry, Northern Ireland. The Oresund Bridge between Copenhagen and Malmö consists of a four-lane highway on the upper level and a pair of railway tracks on the lower level. Tower Bridge in London is a different example of a double-decked bridge, with the central section consisting of a low-level bascule span and a high-level footbridge.
A viaduct is made up of multiple bridges connected into one longer structure. The longest and some of the highest bridges are viaducts, such as the Lake Pontchartrain Causeway and Millau Viaduct.
A multi-way bridge has three or more separate spans which meet near the center of the bridge. Multi-way bridges with only three spans appear as a "T" or "Y" when viewed from above. Multi-way bridges are extremely rare. The Tridge, Margaret Bridge, and Zanesville Y-Bridge are examples.
A bridge can be categorized by what it is designed to carry, such as trains, pedestrian or road traffic (road bridge), a pipeline or waterway for water transport or barge traffic. An aqueduct is a bridge that carries water, resembling a viaduct, which is a bridge that connects points of equal height. A road-rail bridge carries both road and rail traffic. Overway is a term for a bridge that separates incompatible intersecting traffic, especially road and rail.[37] A bridge can carry overhead power lines as does the Storstrøm Bridge.[citation needed]
Some bridges accommodate other purposes, such as the tower of Nový Most Bridge in Bratislava, which features a restaurant, or a bridge-restaurant which is a bridge built to serve as a restaurant. Other suspension bridge towers carry transmission antennas.[citation needed]
Conservationists use wildlife overpasses to reduce habitat fragmentation and animal-vehicle collisions. The first animal bridges sprang up in France in the 1950s, and these types of bridges are now used worldwide to protect both large and small wildlife.[38][39][40]
Bridges are subject to unplanned uses as well. The areas underneath some bridges have become makeshift shelters and homes to homeless people, and the undertimbers of bridges all around the world are spots of prevalent graffiti. Some bridges attract people attempting suicide, and become known as suicide bridges.[citation needed][41]
The materials used to build the structure are also used to categorize bridges. Until the end of the 18th century, bridges were made out of timber, stone and masonry. Modern bridges are typically built of concrete, steel, fiber-reinforced polymers (FRP), stainless steel or combinations of those materials. Living bridges have been constructed of live plants such as Ficus elastica tree roots in India[42] and wisteria vines in Japan.[43]
The Tank bridge transporter (TBT) has the same cross-country performance as a tank even when fully loaded. It can deploy, drop off and load bridges independently, but it cannot recover them.
Unlike buildings, whose design is led by architects, bridges are usually designed by engineers. This follows from the importance of the engineering requirements, namely spanning the obstacle and having the durability to survive, with minimal maintenance, in an aggressive outdoor environment.[33] Bridges are first analysed: the bending moment and shear force distributions due to the applied loads are calculated. For this, the finite element method is the most popular. The analysis can be one-, two- or three-dimensional. For the majority of bridges, a two-dimensional plate model (often with stiffening beams) or an upstand finite element model is sufficient.[48] On completion of the analysis, the bridge is designed to resist the applied bending moments and shear forces; section sizes are selected with sufficient capacity to resist the stresses. Many bridges are made of prestressed concrete, which has good durability properties, achieved either by pre-tensioning of beams prior to installation or by post-tensioning on site.
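As a minimal illustration of the analysis step, the closed-form shear and bending-moment distributions for a simply supported span under a uniformly distributed load can be sketched as follows. The span and load values are purely illustrative; a real bridge analysis would use a finite element model as described above.

```python
def beam_udl(span_m: float, w_kn_per_m: float, x_m: float):
    """Shear force (kN) and bending moment (kNm) at position x along a
    simply supported beam carrying a uniformly distributed load."""
    reaction = w_kn_per_m * span_m / 2          # each support carries half the load
    shear = reaction - w_kn_per_m * x_m
    moment = reaction * x_m - w_kn_per_m * x_m**2 / 2
    return shear, moment

span, w = 30.0, 50.0                            # illustrative: 30 m span, 50 kN/m
v_mid, m_mid = beam_udl(span, w, span / 2)
print(v_mid)    # 0.0  (shear vanishes at midspan)
print(m_mid)    # 5625.0  (w*L^2/8, the peak moment)
```

The peak moment is what the section sizes are then chosen to resist, after factoring.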
In most countries, bridges, like other structures, are designed according to Load and Resistance Factor Design (LRFD) principles. In simple terms, this means that the load is factored up by a factor greater than unity, while the resistance or capacity of the structure is factored down by a factor less than unity. The effect of the factored load (stress, bending moment) should be less than the factored resistance to that effect. Both of these factors allow for uncertainty and are greater when the uncertainty is greater.
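In code, the LRFD inequality reduces to a one-line check. The factor values below are illustrative assumptions only, not taken from any particular design code, which tabulates them per load type and failure mode.

```python
def lrfd_check(load_effect: float, resistance: float,
               load_factor: float = 1.35, resistance_factor: float = 0.9) -> bool:
    """Return True if the factored load effect does not exceed the
    factored resistance (factor values are illustrative placeholders)."""
    return load_factor * load_effect <= resistance_factor * resistance

print(lrfd_check(100.0, 200.0))   # True:  1.35*100 = 135 <= 0.9*200 = 180
print(lrfd_check(100.0, 140.0))   # False: 1.35*100 = 135 >  0.9*140 = 126
```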
Most bridges are utilitarian in appearance, but in some cases, the appearance of the bridge can have great importance.[49] Often, this is the case with a large bridge that serves as an entrance to a city, or crosses over a main harbor entrance. These are sometimes known as signature bridges. Designers of bridges in parks and along parkways often place more importance on aesthetics as well. Examples include the stone-faced bridges along the Taconic State Parkway in New York.
To create a beautiful image, some bridges are built much taller than necessary. This type, often found in east-Asian style gardens, is called a Moon bridge, evoking a rising full moon. Other garden bridges may cross only a dry bed of stream washed pebbles, intended only to convey an impression of a stream. Often in palaces a bridge will be built over an artificial waterway as symbolic of a passage to an important place or state of mind. A set of five bridges cross a sinuous waterway in an important courtyard of the Forbidden City in Beijing, China. The central bridge was reserved exclusively for the use of the Emperor and Empress, with their attendants.
Bridge maintenance consists of a combination of structural health monitoring and testing. This is regulated in country-specific engineering standards and includes ongoing monitoring every three to six months, a simple test or inspection every two to three years, and a major inspection every six to ten years. In Europe, the cost of maintenance is considerable[32] and in some countries exceeds spending on new bridges. The lifetime of welded steel bridges can be significantly extended by aftertreatment of the weld transitions, offering a potentially high benefit by allowing existing bridges to be used far beyond their planned lifetime.
While the response of a bridge to the applied loading is well understood, the applied traffic loading itself is still the subject of research.[50] This is a statistical problem, as loading is highly variable, particularly for road bridges. Load effects in bridges (stresses, bending moments) are designed for using the principles of Load and Resistance Factor Design. Before factoring to allow for uncertainty, the load effect is generally taken to be the maximum characteristic value in a specified return period. Notably, in Europe, it is the maximum value expected in 1000 years.
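The return-period idea can be sketched numerically: if annual-maximum load effects are assumed to follow a Gumbel distribution (a common extreme-value assumption; the location and scale parameters below are invented for illustration), the characteristic value for a given return period follows directly from the annual exceedance probability.

```python
import math

def gumbel_return_level(location: float, scale: float,
                        return_period_years: float) -> float:
    """Load effect exceeded on average once per return period,
    assuming annual maxima follow a Gumbel distribution."""
    p_exceed = 1.0 / return_period_years          # annual exceedance probability
    return location - scale * math.log(-math.log(1.0 - p_exceed))

# Illustrative parameters only (say, MNm of bending moment)
effect_1000yr = gumbel_return_level(10.0, 0.8, 1000.0)
effect_75yr = gumbel_return_level(10.0, 0.8, 75.0)
print(effect_1000yr > effect_75yr)   # True: longer return period, larger value
```

This is why a European (1000-year) characteristic value is larger than, say, a 75-year one for the same site.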
Bridge standards generally include a load model, deemed to represent the characteristic maximum load to be expected in the return period. In the past, these load models were agreed by standard drafting committees of experts but today, this situation is changing. It is now possible to measure the components of bridge traffic load, to weigh trucks, using weigh-in-motion (WIM) technologies. With extensive WIM databases, it is possible to calculate the maximum expected load effect in the specified return period. This is an active area of research, addressing issues of opposing direction lanes,[51][52] side-by-side (same direction) lanes,[53][54] traffic growth,[55] permit/non-permit vehicles[56] and long-span bridges (see below). Rather than repeat this complex process every time a bridge is to be designed, standards authorities specify simplified notional load models, notably HL-93,[57][58] intended to give the same load effects as the characteristic maximum values. The Eurocode is an example of a standard for bridge traffic loading that was developed in this way.[59]
Most bridge standards are only applicable for short and medium spans[60] – for example, the Eurocode is only applicable for loaded lengths up to 200 m. Longer spans are dealt with on a case-by-case basis. It is generally accepted that the intensity of load reduces as span increases, because the probability of many trucks being closely spaced and extremely heavy reduces as the number of trucks involved increases. It is also generally assumed that short spans are governed by a small number of trucks traveling at high speed, with an allowance for dynamics. Longer spans, on the other hand, are governed by congested traffic, and no allowance for dynamics is needed. Calculating the loading due to congested traffic remains a challenge, as there is a paucity of data on inter-vehicle gaps, both within-lane and inter-lane, in congested conditions. Weigh-in-motion (WIM) systems provide data on inter-vehicle gaps but only operate well in free-flowing traffic conditions. Some authors have used cameras to measure gaps and vehicle lengths in jammed situations and have inferred weights from lengths using WIM data.[61] Others have used microsimulation to generate typical clusters of vehicles on the bridge.[62][63][64]
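The congested-traffic case can be caricatured with a few lines of simulation: queue vehicles nose-to-tail with random gaps until a long span is full, then total the weights. Every parameter below is an invented placeholder, not WIM-derived data; real microsimulations model lane choice, vehicle classes and correlated weights.

```python
import random

def congested_span_load(span_m: float, mean_truck_kn: float = 300.0,
                        truck_len_m: float = 15.0, mean_gap_m: float = 5.0,
                        seed: int = 0) -> float:
    """Total weight (kN) on a span under jammed traffic: vehicles are
    queued with exponentially distributed gaps until the span is full.
    All parameter values are illustrative placeholders."""
    rng = random.Random(seed)
    position, total_kn = 0.0, 0.0
    while True:
        gap = rng.expovariate(1.0 / mean_gap_m)
        if position + gap + truck_len_m > span_m:
            break                                  # next vehicle no longer fits
        position += gap + truck_len_m
        total_kn += max(0.0, rng.gauss(mean_truck_kn, 50.0))
    return total_kn

print(congested_span_load(500.0) > 0.0)   # True for any long span
```

Repeating this over many random seeds gives a crude distribution of total congested load, from which an extreme value could be estimated.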
Bridges vibrate under load and this contributes, to a greater or lesser extent, to the stresses.[33] Vibration and dynamics are generally more significant for slender structures such as pedestrian bridges and long-span road or rail bridges. One of the most famous examples is the Tacoma Narrows Bridge that collapsed shortly after being constructed due to excessive vibration. More recently, the Millennium Bridge in London vibrated excessively under pedestrian loading and was closed and retrofitted with a system of dampers. For smaller bridges, dynamics is not catastrophic but can contribute an added amplification to the stresses due to static effects. For example, the Eurocode for bridge loading specifies amplifications of between 10% and 70%, depending on the span, the number of traffic lanes and the type of stress (bending moment or shear force).[65]
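The allowance for dynamics enters the calculation as a simple multiplier on the static load effect. A minimal sketch follows; the 25% default is an arbitrary mid-range assumption, since codes tabulate the allowance by span, lane count and effect type.

```python
def amplified_effect(static_effect: float, dynamic_allowance: float = 0.25) -> float:
    """Total load effect = static effect * (1 + dynamic allowance).
    The default allowance is an illustrative mid-range assumption."""
    return static_effect * (1.0 + dynamic_allowance)

print(amplified_effect(1000.0))   # 1250.0
upper = amplified_effect(1000.0, 0.70)   # an upper-range allowance
```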
There have been many studies of the dynamic interaction between vehicles and bridges during vehicle crossing events. Fryba[66] did pioneering work on the interaction of a moving load and an Euler-Bernoulli beam. With increased computing power, vehicle-bridge interaction (VBI) models have become ever more sophisticated.[67][68][69][70] The concern is that one of the many natural frequencies associated with the vehicle will resonate with the bridge's first natural frequency.[71] The vehicle-related frequencies include body bounce and axle hop, but there are also pseudo-frequencies associated with the vehicle's speed of crossing[72] and there are many frequencies associated with the surface profile.[50] Given the wide variety of heavy vehicles on road bridges, a statistical approach has been suggested, with VBI analyses carried out for many statically extreme loading events.[73]
The failure of bridges is of special concern for structural engineers in trying to learn lessons vital to bridge design, construction and maintenance. The failure of bridges first assumed national interest during the Victorian era when many new designs were being built, often using new materials.
In the United States, the National Bridge Inventory tracks the structural evaluations of all bridges, including designations such as "structurally deficient" and "functionally obsolete".
There are several methods used to monitor the condition of large structures like bridges. Many long-span bridges are now routinely monitored with a range of sensors. Many types of sensors are used, including strain transducers, accelerometers,[74] tiltmeters, and GPS. Accelerometers have the advantage that they are inertial, i.e., they do not require a reference point to measure from. This is often a problem for distance or deflection measurement, especially if the bridge is over water.
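A sketch of what such monitoring data yields: an accelerometer record (synthetic here, with an assumed 2 Hz bridge mode) can be reduced to a dominant natural-frequency estimate with a simple spectral peak search. The sample rate, record length and candidate frequencies are all illustrative choices.

```python
import math

def dominant_frequency(samples, fs, candidates):
    """Return the candidate frequency (Hz) with the largest DFT magnitude
    -- a crude spectral peak search, adequate for one strong mode."""
    def magnitude(f):
        re = sum(x * math.cos(2 * math.pi * f * n / fs) for n, x in enumerate(samples))
        im = sum(x * math.sin(2 * math.pi * f * n / fs) for n, x in enumerate(samples))
        return math.hypot(re, im)
    return max(candidates, key=magnitude)

# Synthetic 10 s accelerometer record containing an assumed 2 Hz bridge mode
fs = 100.0                                        # sample rate, Hz
signal = [math.sin(2 * math.pi * 2.0 * n / fs) for n in range(int(10 * fs))]
print(dominant_frequency(signal, fs, [0.5, 1.0, 2.0, 4.0, 8.0]))   # 2.0
```

A drift in the estimated frequency over time is one indicator inspectors look for, since stiffness loss lowers a structure's natural frequencies.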
An option for structural-integrity monitoring is "non-contact monitoring", which uses the Doppler effect (Doppler shift). A laser beam from a Laser Doppler Vibrometer is directed at the point of interest, and the vibration amplitude and frequency are extracted from the Doppler shift of the laser beam frequency due to the motion of the surface.[75] The advantages of this method are that the equipment setup time is short and that, unlike an accelerometer, it allows measurements on multiple structures in quick succession. Additionally, this method can measure specific points on a bridge that might be difficult to access. However, vibrometers are relatively expensive and have the disadvantage that a reference point is needed to measure from.
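The underlying relationship is simple: for light reflected from a surface moving toward the instrument, the Doppler shift is Δf = 2v/λ, so the surface velocity follows directly from the measured shift. The helium-neon wavelength below is an assumption for illustration.

```python
def surface_velocity_m_s(doppler_shift_hz: float,
                         wavelength_m: float = 633e-9) -> float:
    """Invert Δf = 2v/λ to recover the vibrating surface's velocity
    from the Doppler shift of the reflected beam (633 nm assumed)."""
    return doppler_shift_hz * wavelength_m / 2.0

# A 1 MHz shift at 633 nm corresponds to roughly 0.32 m/s surface velocity
print(surface_velocity_m_s(1.0e6))
```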
Snapshots in time of the external condition of a bridge can be recorded using Lidar to aid bridge inspection.[76] This can provide measurement of the bridge geometry (to facilitate the building of a computer model) but the accuracy is generally insufficient to measure bridge deflections under load.
While larger modern bridges are routinely monitored electronically, smaller bridges are generally inspected visually by trained inspectors. There is considerable research interest in the challenge of smaller bridges as they are often remote and do not have electrical power on site. Possible solutions are the installation of sensors on a specialist inspection vehicle and the use of its measurements as it drives over the bridge to infer information about the bridge condition.[77][78][79] These vehicles can be equipped with accelerometers, gyrometers, Laser Doppler Vibrometers[80][81] and some even have the capability to apply a resonant force to the road surface in order to dynamically excite the bridge at its resonant frequency.
en/4732.html.txt
ADDED
@@ -0,0 +1,106 @@
1 |
+
|
2 |
+
|
3 |
+
A bridge is a structure built to span a physical obstacle, such as a body of water, valley, or road, without closing the way underneath. It is constructed for the purpose of providing passage over the obstacle, usually something that can be detrimental to cross otherwise. There are many different designs that each serve a particular purpose and apply to different situations. Designs of bridges vary depending on the function of the bridge, the nature of the terrain where the bridge is constructed and anchored, the material used to make it, and the funds available to build it.
|
4 |
+
|
5 |
+
Most likely the earliest bridges were fallen trees and stepping stones, while Neolithic people built boardwalk bridges across marshland. The Arkadiko Bridge dating from the 13th century BC, in the Peloponnese, in southern Greece is one of the oldest arch bridges still in existence and use.
|
6 |
+
|
7 |
+
The Oxford English Dictionary traces the origin of the word bridge to an Old English word brycg, of the same meaning.[1] The word can be traced directly back to Proto-Indo-European *bʰrēw-. The word for the card game of the same name has a different origin.
|
8 |
+
|
9 |
+
The simplest type of a bridge is stepping stones, so this may have been one of the earliest types. Neolithic people also built a form of boardwalk across marshes, of which the Sweet Track and the Post Track, are examples from England that are around 6000 years old.[2] Undoubtedly ancient peoples would also have used log bridges; that is a timber bridge[3] that fall naturally or are intentionally felled or placed across streams. Some of the first man-made bridges with significant span were probably intentionally felled trees.[4]
|
10 |
+
|
11 |
+
Among the oldest timber bridges is the Holzbrücke Rapperswil-Hurden crossing upper Lake Zürich in Switzerland; the prehistoric timber piles discovered to the west of the Seedamm date back to 1523 BC. The first wooden footbridge led across Lake Zürich, followed by several reconstructions at least until the late 2nd century AD, when the Roman Empire built a 6-metre-wide (20 ft) wooden bridge. Between 1358 and 1360, Rudolf IV, Duke of Austria, built a 'new' wooden bridge across the lake that has been used to 1878 – measuring approximately 1,450 metres (4,760 ft) in length and 4 metres (13 ft) wide. On April 6, 2001, the reconstructed wooden footbridge was opened, being the longest wooden bridge in Switzerland.
|
12 |
+
|
13 |
+
The Arkadiko Bridge is one of four Mycenaean corbel arch bridges part of a former network of roads, designed to accommodate chariots, between the fort of Tiryns and town of Epidauros in the Peloponnese, in southern Greece. Dating to the Greek Bronze Age (13th century BC), it is one of the oldest arch bridges still in existence and use.
|
14 |
+
Several intact arched stone bridges from the Hellenistic era can be found in the Peloponnese.[5]
|
15 |
+
|
16 |
+
The greatest bridge builders of antiquity were the ancient Romans.[6] The Romans built arch bridges and aqueducts that could stand in conditions that would damage or destroy earlier designs. Some stand today.[7] An example is the Alcántara Bridge, built over the river Tagus, in Spain. The Romans also used cement, which reduced the variation of strength found in natural stone.[8] One type of cement, called pozzolana, consisted of water, lime, sand, and volcanic rock. Brick and mortar bridges were built after the Roman era, as the technology for cement was lost (then later rediscovered).
|
17 |
+
|
18 |
+
In India, the Arthashastra treatise by Kautilya mentions the construction of dams and bridges.[9] A Mauryan bridge near Girnar was surveyed by James Princep.[10] The bridge was swept away during a flood, and later repaired by Puspagupta, the chief architect of emperor Chandragupta I.[10] The use of stronger bridges using plaited bamboo and iron chain was visible in India by about the 4th century.[11] A number of bridges, both for military and commercial purposes, were constructed by the Mughal administration in India.[12]
|
19 |
+
|
20 |
+
Although large Chinese bridges of wooden construction existed at the time of the Warring States period, the oldest surviving stone bridge in China is the Zhaozhou Bridge, built from 595 to 605 AD during the Sui dynasty. This bridge is also historically significant as it is the world's oldest open-spandrel stone segmental arch bridge. European segmental arch bridges date back to at least the Alconétar Bridge (approximately 2nd century AD), while the enormous Roman era Trajan's Bridge (105 AD) featured open-spandrel segmental arches in wooden construction.[citation needed]
|
21 |
+
|
22 |
+
Rope bridges, a simple type of suspension bridge, were used by the Inca civilization in the Andes mountains of South America, just prior to European colonization in the 16th century.
|
23 |
+
|
24 |
+
During the 18th century there were many innovations in the design of timber bridges by Hans Ulrich Grubenmann, Johannes Grubenmann, and others. The first book on bridge engineering was written by Hubert Gautier in 1716.
|
25 |
+
|
26 |
+
A major breakthrough in bridge technology came with the erection of the Iron Bridge in Shropshire, England in 1779. It used cast iron for the first time as arches to cross the river Severn.[13] With the Industrial Revolution in the 19th century, truss systems of wrought iron were developed for larger bridges, but iron does not have the tensile strength to support large loads. With the advent of steel, which has a high tensile strength, much larger bridges were built, many using the ideas of Gustave Eiffel.[citation needed]
|
27 |
+
|
28 |
+
In Canada and the United States, numerous timber covered bridges were built in the late 1700s to the late 1800s, reminiscent of earlier designs in Germany and Switzerland. Some covered bridges were also built in Asia.[14] In later years, some were partly made of stone or metal but the trusses were usually still made of wood; in the United States, there were three styles of trusses, the Queen Post, the Burr Arch and the Town Lattice.[15] Hundreds of these structures still stand in North America. They were brought to the attention of the general public in the 1990s by the novel, movie, and play The Bridges of Madison County.[16][17]
|
29 |
+
|
30 |
+
In 1927 welding pioneer Stefan Bryła designed the first welded road bridge in the world, the Maurzyce Bridge which was later built across the river Słudwia at Maurzyce near Łowicz, Poland in 1929. In 1995, the American Welding Society presented the Historic Welded Structure Award for the bridge to Poland.[18]
|
31 |
+
|
32 |
+
Bridges can be categorized in several different ways. Common categories include the type of structural elements used, by what they carry, whether they are fixed or movable, and by the materials used.
|
33 |
+
|
34 |
+
Bridges may be classified by how the actions of tension, compression, bending, torsion and shear are distributed through their structure. Most bridges will employ all of these to some degree, but only a few will predominate. The separation of forces and moments may be quite clear. In a suspension or cable-stayed bridge, the elements in tension are distinct in shape and placement. In other cases the forces may be distributed among a large number of members, as in a truss.
|
35 |
+
|
36 |
+
The world's longest beam bridge is Lake Pontchartrain Causeway in southern Louisiana in the United States, at 23.83 miles (38.35 km), with individual spans of 56 feet (17 m).[21] Beam bridges are the simplest and oldest type of bridge in use today,[22] and are a popular type.[23]
|
37 |
+
|
38 |
+
Some cantilever bridges also have a smaller beam connecting the two cantilevers, for extra strength.
|
39 |
+
|
40 |
+
The largest cantilever bridge is the 549-metre (1,801 ft) Quebec Bridge in Quebec, Canada.
|
41 |
+
|
42 |
+
With the span of 220 metres (720 ft), the Solkan Bridge over the Soča River at Solkan in Slovenia is the second-largest stone bridge in the world and the longest railroad stone bridge. It was completed in 1905. Its arch, which was constructed from over 5,000 tonnes (4,900 long tons; 5,500 short tons) of stone blocks in just 18 days, is the second-largest stone arch in the world, surpassed only by the Friedensbrücke (Syratalviadukt) in Plauen, and the largest railroad stone arch. The arch of the Friedensbrücke, which was built in the same year, has the span of 90 m (295 ft) and crosses the valley of the Syrabach River. The difference between the two is that the Solkan Bridge was built from stone blocks, whereas the Friedensbrücke was built from a mixture of crushed stone and cement mortar.[24]
The world's largest arch bridge is the Chaotianmen Bridge over the Yangtze River with a length of 1,741 m (5,712 ft) and a span of 552 m (1,811 ft). The bridge was opened April 29, 2009, in Chongqing, China.[25]
The longest suspension bridge in the world is the 3,909 m (12,825 ft) Akashi Kaikyō Bridge in Japan.[27]
The longest cable-stayed bridge since 2012 is the 1,104 m (3,622 ft) Russky Bridge in Vladivostok, Russia.[31]
Some engineers sub-divide 'beam' bridges into slab, beam-and-slab and box girder on the basis of their cross-section.[32] A slab can be solid or voided (though this is no longer favored for inspectability reasons) while beam-and-slab consists of concrete or steel girders connected by a concrete slab.[33] A box-girder cross-section consists of a single-cell or multi-cellular box. In recent years, integral bridge construction has also become popular.
Most bridges are fixed bridges, meaning they have no moving parts and stay in one place until they fail or are demolished. Temporary bridges, such as Bailey bridges, are designed to be assembled, taken apart, transported to a different site, and re-used. They are important in military engineering, and are also used to carry traffic while an old bridge is being rebuilt. Movable bridges are designed to move out of the way of boats or other kinds of traffic, which would otherwise be too tall to fit. These are generally electrically powered.[34]
Double-decked (or double-decker) bridges have two levels. The George Washington Bridge, connecting New York City to Bergen County, New Jersey, US, is the world's busiest bridge, carrying 102 million vehicles annually;[35][36] truss work between its roadway levels provided stiffness to the roadways and reduced movement of the upper level when the lower level was installed three decades after the upper level. The Tsing Ma Bridge and Kap Shui Mun Bridge in Hong Kong have six lanes on their upper decks, and on their lower decks there are two lanes and a pair of tracks for MTR metro trains. Some double-decked bridges only use one level for street traffic; the Washington Avenue Bridge in Minneapolis reserves its lower level for automobile and light rail traffic and its upper level for pedestrian and bicycle traffic (predominantly students at the University of Minnesota). Likewise, in Toronto, the Prince Edward Viaduct has five lanes of motor traffic, bicycle lanes, and sidewalks on its upper deck; and a pair of tracks for the Bloor–Danforth subway line on its lower deck. The western span of the San Francisco–Oakland Bay Bridge also has two levels.
Robert Stephenson's High Level Bridge across the River Tyne in Newcastle upon Tyne, completed in 1849, is an early example of a double-decked bridge. The upper level carries a railway, and the lower level is used for road traffic. Other examples include Britannia Bridge over the Menai Strait and Craigavon Bridge in Derry, Northern Ireland. The Oresund Bridge between Copenhagen and Malmö consists of a four-lane highway on the upper level and a pair of railway tracks at the lower level. Tower Bridge in London is a different example of a double-decked bridge, with the central section consisting of a low-level bascule span and a high-level footbridge.
A viaduct is made up of multiple bridges connected into one longer structure. The longest and some of the highest bridges are viaducts, such as the Lake Pontchartrain Causeway and Millau Viaduct.
A multi-way bridge has three or more separate spans which meet near the center of the bridge. Multi-way bridges with only three spans appear as a "T" or "Y" when viewed from above. Multi-way bridges are extremely rare. The Tridge, Margaret Bridge, and Zanesville Y-Bridge are examples.
A bridge can be categorized by what it is designed to carry, such as trains, pedestrian or road traffic (road bridge), a pipeline, or a waterway for water transport or barge traffic. An aqueduct is a bridge that carries water, resembling a viaduct, which is a bridge that connects points of equal height. A road-rail bridge carries both road and rail traffic. An overway is a bridge that separates incompatible intersecting traffic, especially road and rail.[37] A bridge can also carry overhead power lines, as does the Storstrøm Bridge.[citation needed]
Some bridges accommodate other purposes, such as the tower of Nový Most Bridge in Bratislava, which features a restaurant, or a bridge-restaurant, which is a bridge built to serve as a restaurant. Other suspension bridge towers carry transmission antennas.[citation needed]
Conservationists use wildlife overpasses to reduce habitat fragmentation and animal-vehicle collisions. The first animal bridges sprang up in France in the 1950s, and these types of bridges are now used worldwide to protect both large and small wildlife.[38][39][40]
Bridges are subject to unplanned uses as well. The areas underneath some bridges have become makeshift shelters and homes for homeless people, and the undersides of bridges around the world are common spots for graffiti. Some bridges attract people attempting suicide, and become known as suicide bridges.[41]
The materials used to build the structure are also used to categorize bridges. Until the end of the 18th century, bridges were made out of timber, stone and masonry. Modern bridges are built of concrete, steel, fiber-reinforced polymers (FRP), stainless steel, or combinations of those materials. Living bridges have been constructed of live plants such as Ficus elastica tree roots in India[42] and wisteria vines in Japan.[43]
The Tank bridge transporter (TBT) has the same cross-country performance as a tank even when fully loaded. It can deploy, drop off and load bridges independently, but it cannot recover them.
Unlike buildings, whose design is led by architects, bridges are usually designed by engineers. This follows from the importance of the engineering requirements; namely spanning the obstacle and having the durability to survive, with minimal maintenance, in an aggressive outdoor environment.[33] Bridges are first analysed: the bending moment and shear force distributions due to the applied loads are calculated. For this, the finite element method is the most popular. The analysis can be one, two or three-dimensional. For the majority of bridges, a two-dimensional plate model (often with stiffening beams) or an upstand finite element model is sufficient.[48] On completion of the analysis, the bridge is designed to resist the applied bending moments and shear forces; section sizes are selected with sufficient capacity to resist the stresses. Many bridges are made of prestressed concrete, which has good durability properties, either by pre-tensioning of beams prior to installation or post-tensioning on site.
In most countries, bridges, like other structures, are designed according to Load and Resistance Factor Design (LRFD) principles. In simple terms, this means that the load is factored up by a factor greater than unity, while the resistance or capacity of the structure is factored down by a factor less than unity. The effect of the factored load (stress, bending moment) should be less than the factored resistance to that effect. Both of these factors allow for uncertainty and are greater when the uncertainty is greater.
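The LRFD inequality can be sketched in a few lines; the factor values and member capacities below are illustrative examples only, not taken from any particular design standard.

```python
# Illustrative LRFD check: the factored load effect must not exceed the
# factored resistance. The factors 1.35 and 0.9 are hypothetical examples.

def lrfd_check(load_effect, resistance, load_factor=1.35, resistance_factor=0.9):
    """Return True if the factored load effect is within the factored resistance."""
    return load_factor * load_effect <= resistance_factor * resistance

# A girder with a nominal bending moment demand of 400 kN·m and a nominal
# moment capacity of 650 kN·m: 1.35 * 400 = 540 <= 0.9 * 650 = 585
print(lrfd_check(400, 650))  # True
```

Note that both factors push in the conservative direction: the same girder with only 500 kN·m of capacity would fail the check even though its unfactored capacity exceeds the unfactored demand.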
Most bridges are utilitarian in appearance, but in some cases, the appearance of the bridge can have great importance.[49] Often, this is the case with a large bridge that serves as an entrance to a city, or crosses over a main harbor entrance. These are sometimes known as signature bridges. Designers of bridges in parks and along parkways often place more importance on aesthetics, as well. Examples include the stone-faced bridges along the Taconic State Parkway in New York.
To create a beautiful image, some bridges are built much taller than necessary. This type, often found in east-Asian style gardens, is called a Moon bridge, evoking a rising full moon. Other garden bridges may cross only a dry bed of stream washed pebbles, intended only to convey an impression of a stream. Often in palaces a bridge will be built over an artificial waterway as symbolic of a passage to an important place or state of mind. A set of five bridges cross a sinuous waterway in an important courtyard of the Forbidden City in Beijing, China. The central bridge was reserved exclusively for the use of the Emperor and Empress, with their attendants.
Bridge maintenance consists of a combination of structural health monitoring and testing. This is regulated in country-specific engineering standards and includes ongoing monitoring every three to six months, a simple test or inspection every two to three years, and a major inspection every six to ten years. In Europe, the cost of maintenance is considerable[32] and in some countries exceeds spending on new bridges. The lifetime of welded steel bridges can be significantly extended by aftertreatment of the weld transitions, potentially allowing existing bridges to be used far beyond their planned lifetime.
While the response of a bridge to the applied loading is well understood, the applied traffic loading itself is still the subject of research.[50] This is a statistical problem as loading is highly variable, particularly for road bridges. Load effects in bridges (stresses, bending moments) are designed for using the principles of Load and Resistance Factor Design. Before factoring to allow for uncertainty, the load effect is generally taken as the maximum characteristic value in a specified return period. Notably, in Europe, it is the maximum value expected in 1,000 years.
Bridge standards generally include a load model, deemed to represent the characteristic maximum load to be expected in the return period. In the past, these load models were agreed by standard drafting committees of experts but today, this situation is changing. It is now possible to measure the components of bridge traffic load, to weigh trucks, using weigh-in-motion (WIM) technologies. With extensive WIM databases, it is possible to calculate the maximum expected load effect in the specified return period. This is an active area of research, addressing issues of opposing direction lanes,[51][52] side-by-side (same direction) lanes,[53][54] traffic growth,[55] permit/non-permit vehicles[56] and long-span bridges (see below). Rather than repeat this complex process every time a bridge is to be designed, standards authorities specify simplified notional load models, notably HL-93,[57][58] intended to give the same load effects as the characteristic maximum values. The Eurocode is an example of a standard for bridge traffic loading that was developed in this way.[59]
Most bridge standards are only applicable for short and medium spans;[60] for example, the Eurocode is only applicable for loaded lengths up to 200 m. Longer spans are dealt with on a case-by-case basis. It is generally accepted that the intensity of load reduces as span increases because the probability of many trucks being closely spaced and extremely heavy reduces as the number of trucks involved increases. It is also generally assumed that short spans are governed by a small number of trucks traveling at high speed, with an allowance for dynamics. Longer spans, on the other hand, are governed by congested traffic, and no allowance for dynamics is needed. Calculating the loading due to congested traffic remains a challenge as there is a paucity of data on inter-vehicle gaps, both within-lane and inter-lane, in congested conditions. Weigh-in-Motion (WIM) systems provide data on inter-vehicle gaps but only operate well in free flowing traffic conditions. Some authors have used cameras to measure gaps and vehicle lengths in jammed situations and have inferred weights from lengths using WIM data.[61] Others have used microsimulation to generate typical clusters of vehicles on the bridge.[62][63][64]
Bridges vibrate under load and this contributes, to a greater or lesser extent, to the stresses.[33] Vibration and dynamics are generally more significant for slender structures such as pedestrian bridges and long-span road or rail bridges. One of the most famous examples is the Tacoma Narrows Bridge that collapsed shortly after being constructed due to excessive vibration. More recently, the Millennium Bridge in London vibrated excessively under pedestrian loading and was closed and retrofitted with a system of dampers. For smaller bridges, dynamics is not catastrophic but can contribute an added amplification to the stresses due to static effects. For example, the Eurocode for bridge loading specifies amplifications of between 10% and 70%, depending on the span, the number of traffic lanes and the type of stress (bending moment or shear force).[65]
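The amplification acts as a simple multiplier on the static load effect; a minimal sketch, where the static moment of 200 kN·m and the choice of a 30% factor are made-up example numbers within the 10–70% range quoted above.

```python
# Dynamic amplification: total effect = static effect * (1 + amplification),
# where a 30% amplification corresponds to a dynamic amplification factor of 1.3.

def total_load_effect(static_effect, amplification):
    return static_effect * (1.0 + amplification)

# A static bending moment of 200 kN·m amplified by 30%:
print(total_load_effect(200.0, 0.30))  # 260.0 kN·m
```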
There have been many studies of the dynamic interaction between vehicles and bridges during vehicle crossing events. Fryba[66] did pioneering work on the interaction of a moving load and an Euler-Bernoulli beam. With increased computing power, vehicle-bridge interaction (VBI) models have become ever more sophisticated.[67][68][69][70] The concern is that one of the many natural frequencies associated with the vehicle will resonate with the bridge's first natural frequency.[71] The vehicle-related frequencies include body bounce and axle hop, but there are also pseudo-frequencies associated with the vehicle's speed of crossing[72] and there are many frequencies associated with the surface profile.[50] Given the wide variety of heavy vehicles on road bridges, a statistical approach has been suggested, with VBI analyses carried out for many statically extreme loading events.[73]
The failure of bridges is of special concern for structural engineers in trying to learn lessons vital to bridge design, construction and maintenance. The failure of bridges first assumed national interest during the Victorian era when many new designs were being built, often using new materials.
In the United States, the National Bridge Inventory tracks the structural evaluations of all bridges, including designations such as "structurally deficient" and "functionally obsolete".
There are several methods used to monitor the condition of large structures like bridges. Many long-span bridges are now routinely monitored with a range of sensors. Many types of sensors are used, including strain transducers, accelerometers,[74] tiltmeters, and GPS. Accelerometers have the advantage that they are inertial, i.e., they do not require a reference point to measure from. This is often a problem for distance or deflection measurement, especially if the bridge is over water.
An option for structural-integrity monitoring is "non-contact monitoring", which uses the Doppler effect (Doppler shift). A laser beam from a Laser Doppler Vibrometer is directed at the point of interest, and the vibration amplitude and frequency are extracted from the Doppler shift of the laser beam frequency due to the motion of the surface.[75] The advantage of this method is that the equipment is faster to set up than an accelerometer, making it possible to measure many structures, or many points on one structure, in a short time. Additionally, this method can measure specific points on a bridge that might be difficult to access. However, vibrometers are relatively expensive and have the disadvantage that a reference point is needed to measure from.
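The surface velocity follows directly from the Doppler shift; a minimal sketch, assuming the beam is normal to the moving surface and a 633 nm helium-neon source (both assumptions for illustration, not details from the text).

```python
# Laser Doppler vibrometry: a surface moving at velocity v along the beam
# shifts the reflected frequency by delta_f = 2 * v / wavelength,
# so v = delta_f * wavelength / 2.

def surface_velocity(doppler_shift_hz, wavelength_m):
    return doppler_shift_hz * wavelength_m / 2.0

# A 3.16 MHz shift measured with a 633 nm laser:
print(round(surface_velocity(3.16e6, 633e-9), 3))  # 1.0 m/s
```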
Snapshots in time of the external condition of a bridge can be recorded using Lidar to aid bridge inspection.[76] This can provide measurement of the bridge geometry (to facilitate the building of a computer model) but the accuracy is generally insufficient to measure bridge deflections under load.
While larger modern bridges are routinely monitored electronically, smaller bridges are generally inspected visually by trained inspectors. There is considerable research interest in the challenge of smaller bridges as they are often remote and do not have electrical power on site. Possible solutions are the installation of sensors on a specialist inspection vehicle and the use of its measurements as it drives over the bridge to infer information about the bridge condition.[77][78][79] These vehicles can be equipped with accelerometers, gyrometers, Laser Doppler Vibrometers[80][81] and some even have the capability to apply a resonant force to the road surface in order to dynamically excite the bridge at its resonant frequency.
en/4733.html.txt
Popcorn (popped corn, popcorns or pop-corn) is a variety of corn kernel which expands and puffs up when heated; the same names are also used to refer to the foodstuff produced by the expansion.
A popcorn kernel's strong hull contains the seed's hard, starchy endosperm with 14–20% moisture, which turns to steam as the kernel is heated. Pressure from the steam continues to build until the hull ruptures, allowing the kernel to forcefully expand, from 20 to 50 times its original size, and then cool.[1]
Some strains of corn (taxonomized as Zea mays) are cultivated specifically as popping corns. The Zea mays variety everta, a special kind of flint corn, is the most common of these.
Popcorn is one of the six major types of corn, the others being dent corn, flint corn, pod corn, flour corn, and sweet corn.[2]
Corn was domesticated about 10,000 years ago in what is now Mexico.[3] Archaeological finds show that people have known about popcorn for thousands of years. In Mexico, for example, remnants of popcorn have been found dating to circa 3600 BC.[4]
Through the 19th century, popping of the kernels was achieved by hand on stove tops. Kernels were sold on the East Coast of the United States under names such as Pearls or Nonpareil. The term popped corn first appeared in John Russell Bartlett's 1848 Dictionary of Americanisms.[5][6] Popcorn is an ingredient in Cracker Jack and, in the early years of the product, it was popped by hand.[5]
Popcorn's accessibility increased rapidly in the 1890s with Charles Cretors' invention of the popcorn maker. Cretors, a Chicago candy store owner, had created a number of steam-powered machines for roasting nuts and applied the technology to the corn kernels. By the turn of the century, Cretors had created and deployed street carts equipped with steam-powered popcorn makers.[7]
During the Great Depression, popcorn was fairly inexpensive at 5–10 cents a bag and became popular. Thus, while other businesses failed, the popcorn business thrived and became a source of income for many struggling farmers, including the Redenbacher family, namesake of the famous popcorn brand. During World War II, sugar rations diminished candy production, and Americans compensated by eating three times as much popcorn as they had before.[8] The snack was popular at theaters, much to the initial displeasure of many of the theater owners, who thought it distracted from the films. Their minds eventually changed, however, and in 1938 a Midwestern theater owner named Glen W. Dickinson Sr. installed popcorn machines in the lobbies of his Dickinson theaters. Popcorn was making more profit than theater tickets, and at the suggestion of his production consultant, R. Ray Aden, Dickinson purchased popcorn farms and was able to keep ticket prices down. The venture was a financial success, and the trend to serve popcorn soon spread.[5]
In 1970, Orville Redenbacher's namesake brand of popcorn was launched. In 1981, General Mills received the first patent for a microwave popcorn bag; popcorn consumption saw a sharp increase, by tens of thousands of pounds, in the years following.[7]
At least six localities (all in the Midwestern United States) claim to be the "Popcorn Capital of the World": Ridgway, Illinois; Valparaiso, Indiana; Van Buren, Indiana; Schaller, Iowa; Marion, Ohio; and North Loup, Nebraska. According to the USDA, corn used for popcorn production is specifically planted for this purpose; most is grown in Nebraska and Indiana, with increasing area in Texas.[9][10] As the result of an elementary school project, popcorn became the official state snack food of Illinois.[11]
Each kernel of popcorn contains a certain amount of moisture and oil. Unlike most other grains, the outer hull of the popcorn kernel is both strong and impervious to moisture and the starch inside consists almost entirely of a hard type.[12]
As the oil and water within the kernel are heated, they turn the moisture in the kernel into pressurized steam. Under these conditions, the starch inside the kernel gelatinizes, softens, and becomes pliable. The internal pressure of the entrapped steam continues to increase until the breaking point of the hull is reached: a pressure of approximately 135 psi (930 kPa)[12] and a temperature of 180 °C (356 °F). The hull thereupon ruptures rapidly and explodes, causing a sudden drop in pressure inside the kernel and a corresponding rapid expansion of the steam, which expands the starch and proteins of the endosperm into airy foam. As the foam rapidly cools, the starch and protein polymers set into the familiar crispy puff.[12] Special varieties are grown to give improved popping yield. Though the kernels of some wild types will pop, the cultivated strain is Zea mays everta, which is a special kind of flint corn.
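The popping conditions quoted above can be cross-checked with a small unit conversion, using standard conversion factors and the figures given in the text.

```python
# Cross-check of the quoted popping conditions: 135 psi ~ 930 kPa and
# 180 degrees Celsius = 356 degrees Fahrenheit.
PSI_TO_KPA = 6.894757  # standard psi -> kPa conversion factor

pop_pressure_kpa = 135 * PSI_TO_KPA
print(round(pop_pressure_kpa))  # 931, consistent with the ~930 kPa quoted

def celsius_to_fahrenheit(c):
    return c * 9.0 / 5.0 + 32.0

print(celsius_to_fahrenheit(180))  # 356.0
```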
Popcorn can be cooked with butter or oil. Although small quantities can be popped in a stove-top kettle or pot in a home kitchen, commercial sale of freshly popped popcorn employs specially designed popcorn machines, which were invented in Chicago, Illinois, by Charles Cretors in 1885. Cretors successfully introduced his invention at the Columbian Exposition in 1893. At this same world's fair, F.W. Rueckheim introduced a molasses-flavored "Candied Popcorn", the first caramel corn; his brother, Louis Rueckheim, slightly altered the recipe and introduced it as Cracker Jack popcorn in 1896.[13]
Cretors's invention introduced the first patented steam-driven popcorn machine that popped corn in oil. Previously, vendors popped corn by holding a wire basket over an open flame. At best, the result was a hot, dry, unevenly cooked snack. Cretors's machine popped corn in a mixture of one-third clarified butter, two-thirds lard, and salt. This mixture can withstand the 450 °F (232 °C) temperature needed to pop corn and it produces little smoke. A fire under a boiler created steam that drove a small engine; that engine drove the gears, shaft, and agitator that stirred the corn and powered a small automated clown puppet-like figure, "the Toasty Roasty Man", an attention-getting amusement intended to attract business. A wire connected to the top of the cooking pan allowed the operator to disengage the drive mechanism, lift the cover, and dump popped corn into the storage bin beneath. Exhaust from the steam engine was piped to a hollow pan below the corn storage bin and kept freshly popped corn uniformly warm. Excess steam was also used to operate a small, shrill whistle to attract attention.[14]
A different method of popcorn-making involves the "popcorn hammer", a large cast-iron canister that is sealed with a heavy lid and slowly turned over a fire in rotisserie fashion.
Popping results are sensitive to the rate at which the kernels are heated. If heated too quickly, the steam in the outer layers of the kernel can reach high pressures and rupture the hull before the starch in the center of the kernel can fully gelatinize, leading to partially popped kernels with hard centers. Heating too slowly leads to entirely unpopped kernels: the tip of the kernel, where it attached to the cob, is not entirely moisture-proof, and when heated slowly, the steam can leak out of the tip fast enough to keep the pressure from rising sufficiently to break the hull and cause the pop.[15]
Producers and sellers of popcorn consider two major factors in evaluating the quality of popcorn: what percentage of the kernels will pop, and how much each popped kernel expands. Expansion is an important factor to both the consumer and vendor. For the consumer, larger pieces of popcorn tend to be more tender and are associated with higher quality. For the grower, distributor and vendor, expansion is closely correlated with profit: vendors such as theaters buy popcorn by weight and sell it by volume. For these reasons, higher-expansion popcorn fetches a higher profit per unit weight.
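The buy-by-weight, sell-by-volume economics can be sketched in a few lines; the expansion ratio and serving size below are made-up illustrative numbers.

```python
# Illustrative only: the expansion ratio (popped volume per unit kernel
# weight) fixes how many servings a kilogram of kernels yields, so revenue
# per kilogram scales directly with expansion.

def servings_per_kg(expansion_cm3_per_g, serving_volume_litres):
    litres_per_kg = expansion_cm3_per_g  # cm^3 per g is numerically litres per kg
    return litres_per_kg / serving_volume_litres

# Popcorn expanding to 40 cm^3/g, sold in 2-litre servings:
print(servings_per_kg(40, 2.0))  # 20.0 servings per kilogram of kernels
```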
Popcorn will pop when freshly harvested, but not well; its high moisture content leads to poor expansion and chewy pieces of popcorn. Kernels with a high moisture content are also susceptible to mold when stored. For these reasons, popcorn growers and distributors dry the kernels until they reach the moisture level at which they expand the most. This differs by variety and conditions, but is generally in the range of 14–15% moisture by weight. If the kernels are over-dried, the expansion rate will suffer and the percentage of kernels that pop will decline.
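The drying target can be expressed as a short mass-balance calculation; a sketch, assuming moisture is quoted as a fraction of total (wet) kernel weight and using a hypothetical 20% harvest moisture.

```python
# Dry matter is conserved during drying, so the wet weight at a target
# moisture content follows directly. Moisture fractions are by total weight.

def weight_after_drying(initial_weight_kg, initial_moisture, target_moisture):
    dry_matter = initial_weight_kg * (1.0 - initial_moisture)
    return dry_matter / (1.0 - target_moisture)

# 100 kg of kernels harvested at 20% moisture, dried to the 14% target:
print(round(weight_after_drying(100, 0.20, 0.14), 1))  # 93.0 kg
```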
When the popcorn has finished popping, sometimes unpopped kernels remain. Known in the popcorn industry as "old maids",[16] these kernels fail to pop because they do not have enough moisture to create enough steam for an explosion. Re-hydrating prior to popping usually results in eliminating the unpopped kernels.
Popcorn varieties are broadly categorized by the shape of the kernels, the color of the kernels, or the shape of the popped corn. While the kernels may come in a variety of colors, the popped corn is always off-yellow or white as it is only the hull (or pericarp) that is colored. "Rice" type popcorn has long kernels pointed at both ends; "pearl" type kernels are rounded at the top. Commercial popcorn production has moved mostly to pearl types.[17] Historically, pearl popcorn was usually yellow and rice popcorn usually white. Today both shapes are available in both colors, as well as others including black, red, mauve, purple, and variegated. Mauve and purple popcorn usually have smaller, nutty kernels. Commercial production is dominated by white and yellow.[18]
In the popcorn industry, a popped kernel of corn is known as a "flake". Two shapes of flakes are commercially important. "Butterfly" (or "snowflake")[19] flakes are irregular in shape and have a number of protruding "wings". "Mushroom" flakes are largely ball-shaped, with few wings. Butterfly flakes are regarded as having better mouthfeel, with greater tenderness and less noticeable hulls. Mushroom flakes are less fragile than butterfly flakes and are therefore often used for packaged popcorn or confectionery, such as caramel corn.[18] The kernels from a single cob of popcorn may form both butterfly and mushroom flakes; hybrids that produce 100% butterfly flakes or 100% mushroom flakes exist, the latter developed only as recently as 1998.[18] Growing conditions and popping environment can also affect the butterfly-to-mushroom ratio.
When referring to multiple pieces of popcorn collectively, it is acceptable to use the term "popcorn". When referring to a singular piece of popcorn, the accepted term is "kernel".
Popcorn is a popular snack food at sporting events and in movie theaters, where it has been served since the 1930s.[20] Cinemas have come under fire due to their high markup on popcorn; Stuart Hanson, a film historian at De Montfort University in Leicester, once said, "One of the great jokes in the industry is that popcorn is second only to cocaine or heroin in terms of profit."[21]
Traditions differ as to whether popcorn is consumed as a hearty snack food with salt (predominating in the United States) or as a sweet snack food with caramelized sugar (predominating in Germany).
Popcorn smell has an unusually attractive quality for human beings. This is largely because it contains high levels of the chemicals 6-acetyl-2,3,4,5-tetrahydropyridine and 2-acetyl-1-pyrroline, very powerful aroma compounds that are also used by food and other industries either to make products that smell like popcorn, bread, or other foods containing the compound in nature, or for other purposes.[citation needed]
Popcorn as a breakfast cereal was consumed by Americans in the 1800s and generally consisted of popcorn with milk and a sweetener.[22]
Popcorn balls (popped kernels stuck together with a sugary "glue") were hugely popular around the turn of the 20th century, but their popularity has since waned. Popcorn balls are still served in some places as a traditional Halloween treat. Cracker Jack is a popular, commercially produced candy that consists of peanuts mixed in with caramel-covered popcorn. Kettle corn is a variation of normal popcorn, cooked with white sugar and salt, traditionally in a large copper kettle. Once reserved for specialty shops and county fairs, kettle corn has recently become popular, especially in the microwave popcorn market. The popcorn maker is a relatively new home appliance, and its popularity is increasing because it offers the opportunity to add flavors of the consumer's own choice and to choose healthy-eating popcorn styles.
Air-popped popcorn is naturally high in dietary fiber and antioxidants,[23] low in calories and fat, and free of sugar and sodium.[24] This can make it an attractive snack to people with dietary restrictions on the intake of calories, fat or sodium. For the sake of flavor, however, large amounts of fat, sugar, and sodium are often added to prepared popcorn, which can quickly convert it to a very poor choice for those on restricted diets.
One example of this first came to public attention in the mid-1990s, when the Center for Science in the Public Interest produced a report about "Movie Popcorn", which became the subject of a widespread publicity campaign. The movie theaters surveyed used coconut oil to pop the corn, and then topped it with butter or margarine. "A medium-size buttered popcorn", the report said, "contains more fat than a breakfast of bacon and eggs, a Big Mac and fries, and a steak dinner combined".[25] The practice continues today. For example, according to DietFacts.com, a small popcorn from Regal Cinema Group (the largest theater chain in the United States)[26] still contains 29 g of saturated fat,[27] the equivalent of a full day-and-a-half's reference daily intake.[28]
In studies conducted by the Motion Picture Association of America it was found that the average American attends six movies a year and that movie theater popcorn and other movie theater snacks are viewed as a treat, not intended to be part of a regular diet.[29]
Popcorn is included on the list of foods that the American Academy of Pediatrics recommends not serving to children under four, because of the risk of choking.[30]
Microwaveable popcorn represents a special case, since it is designed to be cooked along with its various flavoring agents. One of these formerly common artificial-butter flavorants, diacetyl, has been implicated in causing respiratory illnesses in microwave popcorn factory workers, also known as "popcorn lung". Major manufacturers in the United States have stopped using this chemical, including Orville Redenbacher's, Act II, Pop Secret and Jolly Time.[citation needed][31][32]
Popcorn, threaded onto a string, is used as a wall or Christmas tree decoration in some parts of North America,[33][34] as well as on the Balkan peninsula.[35]
Some shipping companies have experimented with using popcorn as a biodegradable replacement for expanded polystyrene packing material. However, popcorn has numerous undesirable properties as a packing material, including attractiveness to pests, flammability, and a higher cost and greater density than expanded polystyrene. A more processed form of expanded corn foam has been developed to overcome some of these limitations.[36]